Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Mon Jan 02, 2023 3:34 pm

Hi, I noticed that on my Disk settings page there is a notice about my RAID status:

https://snipboard.io/IoXQMq.jpg

Then, if I go into the RAID settings, I see that disk E is missing... is this bad news? What do I have to do, or what can I do? I see that the status of 4 of the 5 disks is "in_sync", but I don't know what that means or whether there is an end to this. And what about disk 5? :dontknow

https://snipboard.io/MtaNqs.jpg

Anyway... I can still access the web interface and the SMB folder with no problems for now. I would like to prevent any destructive consequences... so, I'm in your hands. Thanks.

Re: Raid clean, degraded completed, estimated time ...

Postby Jocko » Tue Jan 03, 2023 6:11 pm

Hi

Indeed, there are some errors with disk E (partition /dev/sde8). Please post the output of these commands:
Code: Select all
gdisk -l /dev/sde
cat /proc/partitions|grep sd
cat /proc/mdstat
mdadm --detail --scan
mdadm --detail /dev/md0
mdadm --examine /dev/sd[abcde]8
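(As a quick way to spot a member that has fallen behind, you can also filter the examine output for the device names, event counters and update times; a small sketch, assuming grep -E is available in the NAS shell:)
Code: Select all
# list each member with its Events counter and last Update Time
mdadm --examine /dev/sd[abcde]8 | grep -E '^/dev/sd|Events|Update Time'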

Re: Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Wed Jan 04, 2023 9:52 am


Hi Jocko! Thanks for the help, here are the outputs:

Code: Select all
root@Norman_Nas2:/ # gdisk -l /dev/sde
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4062 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            4096         1052671   512.0 MiB   8300  Linux filesystem
   2         1052672         2101247   512.0 MiB   8300  Linux filesystem
   3         2101248         3149823   512.0 MiB   8200  Linux swap
   4         3149824         3166207   8.0 MiB     8300  Linux filesystem
   5         3166208         4739071   768.0 MiB   8300  Linux filesystem
   6         4739072         4755455   8.0 MiB     8300  Linux filesystem
   7         4755456         5804031   512.0 MiB   8300  Linux filesystem
   8         5804032      3907029134   1.8 TiB     FD00  Linux RAID


Code: Select all
root@Norman_Nas2:/ # cat /proc/partitions|grep sd
   8        0 1953514584 sdc
   8        1     514080 sdc1
   8        2     514080 sdc2
   8        3     514080 sdc3
   8        4          1 sdc4
   8        5     835380 sdc5
   8        6      64260 sdc6
   8        7     514080 sdc7
   8        8 1950226740 sdc8
   8       16 1953514584 sdb
   8       20          1 sdb4
   8       21     835380 sdb5
   8       22      64260 sdb6
   8       23     514080 sdb7
   8       24 1950226740 sdb8
   8       32 1953514584 sda
   8       33     514080 sda1
   8       34     514080 sda2
   8       35     514080 sda3
   8       36          1 sda4
   8       37     835380 sda5
   8       38      64260 sda6
   8       39     514080 sda7
   8       40 1950226740 sda8
   8       48 1953514584 sde
   8       49     524288 sde1
   8       50     524288 sde2
   8       51     524288 sde3
   8       52       8192 sde4
   8       53     786432 sde5
   8       54       8192 sde6
   8       55     524288 sde7
   8       56 1950612551 sde8
   8       64 1953514584 sdd
   8       68          1 sdd4
   8       69     835380 sdd5
   8       70      64260 sdd6
   8       71     514080 sdd7
   8       72 1950226740 sdd8
root@Norman_Nas2:/ #


Code: Select all
root@Norman_Nas2:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sda8[5] sdd8[3] sdc8[2] sdb8[1]
      7800905728 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      bitmap: 6/15 pages [24KB], 65536KB chunk

unused devices: <none>
root@Norman_Nas2:/ #


Code: Select all
root@Norman_Nas2:/ # mdadm --detail --scan
ARRAY /dev/md0 metadata=1.0 name=Norman_Nas2.local:0 UUID=90bef0c3:0666220d:c5b71016:497cace0
root@Norman_Nas2:/ #


Code: Select all
root@Norman_Nas2:/ # mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 1950226432 (1859.88 GiB 1997.03 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Jan  4 10:48:02 2023
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
           UUID : 90bef0c3:0666220d:c5b71016:497cace0
         Events : 74735

    Number   Major   Minor   RaidDevice State
       5       8       40        0      active sync   /dev/sda8
       1       8       24        1      active sync   /dev/sdb8
       2       8        8        2      active sync   /dev/sdc8
       3       8       72        3      active sync   /dev/sdd8
       8       0        0        8      removed
root@Norman_Nas2:/ #


Code: Select all
root@Norman_Nas2:/ # mdadm --examine /dev/sd[abcde]8
/dev/sda8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 90bef0c3:0666220d:c5b71016:497cace0
           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
   Unused Space : before=0 sectors, after=584 sectors
          State : clean
    Device UUID : 4555d5c8:fdbbf46c:824a2ed9:21d0386b

Internal Bitmap : -16 sectors from superblock
    Update Time : Wed Jan  4 10:48:02 2023
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : ae47faa6 - correct
         Events : 74735

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 90bef0c3:0666220d:c5b71016:497cace0
           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
   Unused Space : before=0 sectors, after=584 sectors
          State : clean
    Device UUID : edb4b8a7:a631c4cc:bcdf7e5b:0f9c1ef6

Internal Bitmap : -16 sectors from superblock
    Update Time : Wed Jan  4 10:48:02 2023
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : fa31311b - correct
         Events : 74735

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 90bef0c3:0666220d:c5b71016:497cace0
           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
   Unused Space : before=0 sectors, after=584 sectors
          State : clean
    Device UUID : 0001bf52:62897431:dc133f41:f7391d64

Internal Bitmap : -16 sectors from superblock
    Update Time : Wed Jan  4 10:48:02 2023
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 5da6a6f2 - correct
         Events : 74735

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 90bef0c3:0666220d:c5b71016:497cace0
           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
   Unused Space : before=0 sectors, after=584 sectors
          State : clean
    Device UUID : 517d6999:41607181:397be566:178210eb

Internal Bitmap : -16 sectors from superblock
    Update Time : Wed Jan  4 10:48:02 2023
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : a0e7a9a1 - correct
         Events : 74735

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 90bef0c3:0666220d:c5b71016:497cace0
           Name : Norman_Nas2.local:0  (local to host Norman_Nas2.local)
  Creation Time : Mon Jan  1 13:08:42 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3901224816 (1860.25 GiB 1997.43 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3901225080 sectors
   Unused Space : before=0 sectors, after=772200 sectors
          State : clean
    Device UUID : 2140a7bc:d393595c:bccbfe36:b4ddd6ff

Internal Bitmap : -16 sectors from superblock
    Update Time : Tue Dec 20 16:43:13 2022
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 83f06386 - correct
         Events : 74317

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
root@Norman_Nas2:/ #

Re: Raid clean, degraded completed, estimated time ...

Postby Jocko » Wed Jan 04, 2023 1:50 pm

Hi

So it seems there is no issue with the sde disk (the partition table is fine and the disk is handled correctly by the OS). On the RAID side it is not flagged as a faulty disk :thumbup. It looks as if the disk was simply missing when the firmware re-assembled the RAID at NAS boot :scratch, as if it had not yet been brought up at that step (a power-supply issue?).
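As an optional side check (not required for the repair), the kernel log from the current boot may show whether sde appeared late; a sketch, assuming the boot messages are still in the dmesg ring buffer:
Code: Select all
# look for detection/error messages about the sde disk
dmesg | grep -i sde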

Then try to re-add the disk:
Code: Select all
mdadm /dev/md0 -a /dev/sde8
or, because the event count on /dev/sde8 is somewhat behind, you may need to use the --force option:
Code: Select all
mdadm /dev/md0 -a /dev/sde8 --force
(this offset may explain why the sde disk is no longer added when you reboot the NAS)

Then check whether the RAID is resynchronizing:
Code: Select all
cat /proc/mdstat
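If you want to follow the rebuild without retyping the command, something like this should also work, assuming watch is available in the NAS shell (busybox usually provides it):
Code: Select all
# refresh the mdstat view every 60 seconds
watch -n 60 cat /proc/mdstat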

Re: Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Wed Jan 04, 2023 2:01 pm


Thanks Jocko, it seems the first command went through without needing the --force option:

Code: Select all
root@Norman_Nas2:/ # mdadm /dev/md0 -a /dev/sde8
mdadm: re-added /dev/sde8
Code: Select all
root@Norman_Nas2:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sde8[4] sda8[5] sdd8[3] sdc8[2] sdb8[1]
      7800905728 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [=========>...........]  recovery = 47.7% (931130240/1950226432) finish=867.6min speed=19576K/sec
      bitmap: 4/15 pages [16KB], 65536KB chunk

unused devices: <none>


If I have understood correctly... it is rebuilding the RAID (see 47.7%, with an estimated finish of 867.6 minutes, roughly 14.5 hours), isn't it? Translated: it seems like good news :P , right?

Re: Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Wed Jan 04, 2023 2:12 pm

The last command again, after many minutes:

Code: Select all
root@Norman_Nas2:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sde8[4] sda8[5] sdd8[3] sdc8[2] sdb8[1]
      7800905728 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>


Seems to be all OK!! Thank you Jocko!! From the web page, too, everything seems to be fine:
https://snipboard.io/ReVrND.jpg
Do you know why this happened?

Re: Raid clean, degraded completed, estimated time ...

Postby Jocko » Wed Jan 04, 2023 2:19 pm

Ok

So that looks good :thumbup Now let's remove the bitmap setting from your RAID (that is the line which caused the wrong RAID state):
Code: Select all
mdadm --grow --bitmap=none /dev/md0
(this may take a long time), then run again:
Code: Select all
cat /proc/mdstat
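Once the grow command returns, the detail output should no longer show the "Intent Bitmap : Internal" line seen earlier; a quick check, assuming grep is available:
Code: Select all
# no output here means the internal bitmap has been removed
mdadm --detail /dev/md0 | grep -i bitmap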

Re: Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Wed Jan 04, 2023 2:27 pm


Hmm... the first command seems to have completed instantly.
Code: Select all
root@Norman_Nas2:/ # mdadm --grow --bitmap=none /dev/md0
root@Norman_Nas2:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sda8[5] sde8[4] sdd8[3] sdc8[2] sdb8[1]
      7800905728 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

unused devices: <none>

Re: Raid clean, degraded completed, estimated time ...

Postby Jocko » Wed Jan 04, 2023 2:38 pm

So all seems OK now. You can now try restarting the NAS and check that the RAID is re-assembled correctly again.
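For reference, after the reboot the same two checks as before should show all five members active again (a sketch, assuming the device names have not changed across the reboot):
Code: Select all
# expect [5/5] [UUUUU] and State : clean
cat /proc/mdstat
mdadm --detail /dev/md0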

Re: Raid clean, degraded completed, estimated time ...

Postby Draftmancorp » Wed Jan 04, 2023 3:24 pm

Awesome! Everything is running well, as before. Thank you Jocko!

