Replaced and initialised HDD. RAID no longer intact

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Wed Oct 15, 2025 6:23 am

hi

here are the outputs

Code: Select all
root@LacieNAS:/ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

root@LacieNAS:/ # mdadm --examine /dev/sd[bcde]8
/dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=544 sectors
          State : clean
    Device UUID : 80878190:f91ebe07:76c039a6:0681fba7

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 38c9640a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808232816 (3723.26 GiB 3997.82 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808233080 sectors
   Unused Space : before=0 sectors, after=2656 sectors
          State : clean
    Device UUID : 4e087259:77fa272d:62e47d0c:bebe9526

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : c02327a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdd8.
/dev/sde8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=544 sectors
          State : clean
    Device UUID : e3ba3721:87210821:cca42baf:37a52640

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 83e6a284 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
root@LacieNAS:/ #


thank you
boerie

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Wed Oct 15, 2025 7:44 am

Hi

So sdc8 seems to be clean :thumbup
Note: you got this behaviour because you did not use the regular way to build the raid: previously sdc8 was formatted as a single volume and its filesystem was not destroyed before including it in the raid. So when the firmware failed to assemble the raid, it created a volume Vol-C because it also detected a filesystem on that partition.
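Side note: if you ever reuse a partition that previously held a standalone filesystem as a raid member, you can first check which signatures are still on it with a read-only command like the one below (a minimal sketch; wipefs/blkid may or may not be present on the LaCie firmware):
Code: Select all
# Without -a, wipefs only lists the signatures it finds; nothing is erased
wipefs /dev/sdc8
# Alternative if wipefs is not installed
blkid /dev/sdc8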

With 3 raid members, you should be able to assemble the raid. So do
Code: Select all
mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 missing /dev/sde8

Post
Code: Select all
cat /proc/mdstat
Jocko

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 6:28 am

Unfortunately I get this message

Code: Select all
root@LacieNAS:/ # mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 missing /dev/sde8
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
root@LacieNAS:/ #
boerie

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 16, 2025 7:16 am

Sorry, 'missing' must only be used when you create a raid, not when assembling one...
So do
Code: Select all
 mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 /dev/sde8
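For reference only: 'missing' is valid with --create, where it reserves a raid slot for a device that will be added later. Below is a purely illustrative sketch of that syntax with placeholder device names; do not run it on this NAS, since re-creating an array overwrites the existing superblocks.
Code: Select all
# ILLUSTRATION ONLY - never run --create on an array that still holds data
# 'missing' keeps raid slot 2 empty until a replacement device is added
mdadm --create /dev/md9 --level=5 --raid-devices=4 /dev/sdX8 /dev/sdY8 missing /dev/sdZ8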
Jocko

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 1:59 pm

root@LacieNAS:/ # mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 dev/sde8
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md0 has been started with 3 drives (out of 4).
root@LacieNAS:/ #

is this correct?

thank you
boerie

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 16, 2025 2:35 pm

Yes that seems correct, you should have a raid now.

Please post
Code: Select all
cat /proc/mdstat
mdadm --detail /dev/md0


Later we will add your new sdd disk.
Jocko

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 6:38 pm

Hello

Code: Select all
root@LacieNAS:/ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb8[0] sde8[3] sdc8[1]
      11712345600 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>
root@LacieNAS:/ #

root@LacieNAS:/ # mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 3904115200 (3723.25 GiB 3997.81 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct 11 12:54:00 2025
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : LacieNAS.local:0  (local to host LacieNAS.local)
           UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       56        0      active sync   /dev/sdb8
       1       8       40        1      active sync   /dev/sdc8
       4       0        0        4      removed
       3       8        8        3      active sync   /dev/sde8
root@LacieNAS:/ #


Thank you
boerie

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Fri Oct 17, 2025 7:51 am

Hi

So still good

Currently you have a bitmap on your raid, which may impact performance. mdadm had to create it when you got a faulty disk.
So to remove it, do
Code: Select all
mdadm --grow --bitmap=none /dev/md0
Check that the line "bitmap: 0/30 pages [0KB], 65536KB chunk" has disappeared from the mdstat output.
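A quick way to check (simple sketch; grep should print nothing once the bitmap is gone):
Code: Select all
# No output means the internal bitmap has been removed
grep bitmap /proc/mdstat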

Now we will restore its redundancy:

Note: I assume sdd8 is not mounted. If it is, do
Code: Select all
umount /dev/sdd8
mount
No /dev/sdd8 device should be mounted or listed in the mount output.
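A more targeted check, if you prefer (small sketch):
Code: Select all
# Prints a line only if sdd8 is currently mounted
mount | grep sdd8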

The first step is to make a small change to the partition table: change the partition type of sdd8. Follow this sequence carefully (a non-interactive alternative is sketched after the list):
  1. Open gdisk interface command :
    Code: Select all
    gdisk /dev/sdd
  2. Then perform these actions:
    • Select t as command and press Enter
    • If gdisk asks for a partition number, enter 8 and press Enter
    • Enter fd00 as the new partition type and press Enter
    • Enter w and press Enter, then confirm your action (y) until you exit the gdisk command interface
    • Check that your partition table is correct
      Code: Select all
      gdisk -l /dev/sdd
      You should get a line like this for partition 8, with the code FD00 as the partition type
      Code: Select all
      Number  Start (sector)    End (sector)  Size       Code  Name
         8            4096      5860533134   2.7 TiB     FD00  Linux RAID
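As mentioned above, if sgdisk happens to be installed on the NAS (an assumption; the firmware may only ship gdisk), the same type change can be done non-interactively:
Code: Select all
# Set the type code of partition 8 on /dev/sdd to fd00 (Linux RAID), then verify
sgdisk --typecode=8:fd00 /dev/sdd
gdisk -l /dev/sdd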

Note: I assume md0 is not mounted. If it is, do
Code: Select all
umount /dev/md0
mount
No /dev/md0 device should be mounted or listed in the mount output.

Now add sdd8 as a raid member of md0:
Code: Select all
mdadm /dev/md0 -a /dev/sdd8

Check that the synchronization is running, to restore the redundancy on md0:
Code: Select all
cat /proc/mdstat

Then wait for the synchronization to complete before restarting the NAS (several hours :tapfoot )
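If you want to follow the resync progress without retyping the command, a plain shell loop works (minimal sketch; it avoids relying on watch being present in the firmware):
Code: Select all
# Print the raid status every 60 seconds; stop with Ctrl-C
while true; do cat /proc/mdstat; echo; sleep 60; done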

Note: at this step we are close to fully restoring the firmware features.
Jocko
