Replaced and initialised HDD. RAID no longer intact

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Wed Oct 15, 2025 6:23 am

hi

here are the outputs

Code: Select all
root@LacieNAS:/ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

root@LacieNAS:/ # mdadm --examine /dev/sd[bcde]8
/dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=544 sectors
          State : clean
    Device UUID : 80878190:f91ebe07:76c039a6:0681fba7

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 38c9640a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808232816 (3723.26 GiB 3997.82 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808233080 sectors
   Unused Space : before=0 sectors, after=2656 sectors
          State : clean
    Device UUID : 4e087259:77fa272d:62e47d0c:bebe9526

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : c02327a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on /dev/sdd8.
/dev/sde8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=544 sectors
          State : clean
    Device UUID : e3ba3721:87210821:cca42baf:37a52640

Internal Bitmap : -24 sectors from superblock
    Update Time : Sat Oct 11 12:54:00 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 83e6a284 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
root@LacieNAS:/ #


thank you
boerie
Donator VIP
 
Posts: 26
Joined: Tue Apr 15, 2025 12:29 pm

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Wed Oct 15, 2025 7:44 am

Hi

So sdc8 seems to be clean :thumbup
Note: you got this behaviour because the raid was not built the regular way: sdc8 was previously formatted as a single volume and its filesystem had not been destroyed before it was included in the raid. So when the firmware failed to assemble the raid, it created a volume Vol-C because it also detected a filesystem on that partition.
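
For reference, if you ever want to check whether a partition still carries a leftover filesystem signature before adding it to a raid, a non-destructive check (assuming blkid and wipefs are available on this firmware) would be:
Code: Select all
# report any filesystem/raid signatures found on the partition (read-only)
blkid /dev/sdc8
# list signatures without erasing anything (-n = no-act)
wipefs -n /dev/sdc8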

With 3 raid members, you should be able to assemble the raid. So do
Code: Select all
mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 missing /dev/sde8

Then post the output of
Code: Select all
cat /proc/mdstat
Jocko
Site Admin - expert
 
Posts: 11576
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 6:28 am

Unfortunately I get this message

Code: Select all
root@LacieNAS:/ # mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 missing /dev/sde8
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
root@LacieNAS:/ #
boerie
Donator VIP
 
Posts: 26
Joined: Tue Apr 15, 2025 12:29 pm

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 16, 2025 7:16 am

Sorry, 'missing' must only be used when you create a raid...
So do
Code: Select all
 mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 /dev/sde8
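
For the record, 'missing' is only accepted by mdadm --create, where it reserves a slot for a device to be added later. Just to illustrate the syntax (not something to run here):
Code: Select all
# example only: create a degraded 4-disk raid5 with slot 2 left empty
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb8 /dev/sdc8 missing /dev/sde8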
Jocko
Site Admin - expert
 
Posts: 11576
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 1:59 pm

Code: Select all
root@LacieNAS:/ # mdadm --assemble --force /dev/md0 /dev/sdb8 /dev/sdc8 /dev/sde8
mdadm: failed to get exclusive lock on mapfile - continue anyway...
mdadm: /dev/md0 has been started with 3 drives (out of 4).
root@LacieNAS:/ #

is this correct?

thank you
boerie
Donator VIP
 
Posts: 26
Joined: Tue Apr 15, 2025 12:29 pm

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 16, 2025 2:35 pm

Yes, that seems correct; you should have a raid now.

Please post
Code: Select all
cat /proc/mdstat
mdadm --detail /dev/md0


Later we will add your new sdd disk.
Jocko
Site Admin - expert
 
Posts: 11576
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 16, 2025 6:38 pm

Hello

Code: Select all
root@LacieNAS:/ # cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb8[0] sde8[3] sdc8[1]
      11712345600 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>
root@LacieNAS:/ #

root@LacieNAS:/ # mdadm --detail /dev/md0
/dev/md0:
        Version : 1.0
  Creation Time : Sat Oct 11 12:54:00 2025
     Raid Level : raid5
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 3904115200 (3723.25 GiB 3997.81 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Oct 11 12:54:00 2025
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : LacieNAS.local:0  (local to host LacieNAS.local)
           UUID : a6a81cbf:e6e4dd12:4e969bc3:4d662a7d
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       56        0      active sync   /dev/sdb8
       1       8       40        1      active sync   /dev/sdc8
       4       0        0        4      removed
       3       8        8        3      active sync   /dev/sde8
root@LacieNAS:/ #


Thank you
boerie
Donator VIP
 
Posts: 26
Joined: Tue Apr 15, 2025 12:29 pm

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Fri Oct 17, 2025 7:51 am

Hi

So far, still good.

Currently there is a bitmap on your raid, which may impact performance. mdadm had to create it when you got a faulty disk.
So to remove it, do
Code: Select all
mdadm --grow --bitmap=none /dev/md0
then check that the line "bitmap: 0/30 pages [0KB], 65536KB chunk" has disappeared from the output of /proc/mdstat.
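
If you want to double-check from mdadm itself (just another way to see the same thing), the "Intent Bitmap : Internal" line should be gone from the detail output:
Code: Select all
mdadm --detail /dev/md0 | grep -i bitmap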

Now let's restore the redundancy:

Note: I assume sdd8 is not mounted. If it is, do
Code: Select all
umount /dev/sdd8
mount
no /dev/sdd8 entry should appear in the mount output
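
To narrow the mount output down to that device only, a simple filter (which should print nothing) is:
Code: Select all
# no output means sdd8 is not mounted anywhere
mount | grep sdd8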

The first step is to make a small change to the partition table: change the partition type of sdd8. Follow this sequence carefully (a non-interactive alternative is sketched just after the list):
  1. Open gdisk interface command :
    Code: Select all
    gdisk /dev/sdd
  2. Then perform these actions :
    • Enter t as the command and press Enter (if gdisk asks for a partition number, enter 8)
    • Enter fd00 as the new partition type and press Enter
    • Enter w and press Enter, then confirm your action (y) until you exit the gdisk command interface
    • Check that your partition table is now correct
      Code: Select all
      gdisk -l /dev/sdd
      you should get a line like this for partition 8, with FD00 as the partition type code
      Code: Select all
      Number  Start (sector)    End (sector)  Size       Code  Name
         8            4096      5860533134   2.7 TiB     FD00  Linux RAID
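
If sgdisk (the scriptable companion of gdisk) happens to be present on this firmware, the same change can be done non-interactively; this is only an alternative to the interactive sequence above:
Code: Select all
# set the type code of partition 8 to fd00 (Linux RAID) in one command
sgdisk --typecode=8:fd00 /dev/sdd
# then verify the partition table
gdisk -l /dev/sdd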

Note: I assume md0 is not mounted. If it is, do
Code: Select all
umount /dev/md0
mount
no /dev/md0 entry should appear in the mount output

Now add sdd8 as a raid member of md0
Code: Select all
mdadm /dev/md0 -a /dev/sdd8

Check that synchronization is running to restore the redundancy of md0
Code: Select all
cat /proc/mdstat
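
If you want to follow the progress without retyping the command, and if watch is available on the nas (it may not be in this busybox environment), you could use:
Code: Select all
# refresh the raid status every 60 seconds (Ctrl-C to stop)
watch -n 60 cat /proc/mdstat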

Then wait for the synchronization to complete before restarting the nas (several hours :tapfoot )

Note: at this step we are close to fully restoring the firmware features
Jocko
Site Admin - expert
 
Posts: 11576
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Fri Oct 17, 2025 11:39 am

all seems to have worked


Code: Select all
root@LacieNAS:/ # mdadm --grow --bitmap=none /dev/md0
root@LacieNAS:/ # umount /dev/sdd8
umount: /dev/sdd8: not mounted.
root@LacieNAS:/ # mount
/dev/sda2 on / type ext3 (rw,noatime,data=ordered)
none on /proc type proc (rw,noatime)
none on /sys type sysfs (rw,noatime)
none on /dev/pts type devpts (rw,noatime,mode=600)
/dev/sda5 on /rw_fs type ext3 (rw,noatime,data=ordered)
tmpfs on /rw_fs/tmp/usr/var type tmpfs (rw,noatime,size=5000k)
nfsd on /proc/fs/nfsd type nfsd (rw,noatime)
/dev/sda8 on /share/1000 type ext4 (rw,noatime,nodelalloc,data=ordered)
/dev/sda7 on /lacie-boot type ext3 (rw,noatime,data=ordered)
/dev/sda7 on /lib/firmware type ext3 (rw,noatime,data=ordered)
root@LacieNAS:/ # gdisk /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): t
Using 8
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdd.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.
root@LacieNAS:/ # gdisk -1 /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Usage: gdisk [-l] device_file
root@LacieNAS:/ # gdisk -l /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdd: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): B62B8F51-9636-4497-8A82-DA5A8E7F41C6
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5803998 sectors (2.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   8         5804032      7814037134   3.6 TiB     FD00  Linux RAID
root@LacieNAS:/ # umount /dev/md0
umount: /dev/md0: not mounted.
root@LacieNAS:/ # mount
/dev/sda2 on / type ext3 (rw,noatime,data=ordered)
none on /proc type proc (rw,noatime)
none on /sys type sysfs (rw,noatime)
none on /dev/pts type devpts (rw,noatime,mode=600)
/dev/sda5 on /rw_fs type ext3 (rw,noatime,data=ordered)
tmpfs on /rw_fs/tmp/usr/var type tmpfs (rw,noatime,size=5000k)
nfsd on /proc/fs/nfsd type nfsd (rw,noatime)
/dev/sda8 on /share/1000 type ext4 (rw,noatime,nodelalloc,data=ordered)
/dev/sda7 on /lacie-boot type ext3 (rw,noatime,data=ordered)
/dev/sda7 on /lib/firmware type ext3 (rw,noatime,data=ordered)
root@LacieNAS:/ # mdadm /dev/md0 -a /dev/sdd8
mdadm: added /dev/sdd8
root@LacieNAS:/ #  cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdd8[4] sdb8[0] sde8[3] sdc8[1]
      11712345600 blocks super 1.0 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]
      [>....................]  recovery =  0.0% (225280/3904115200) finish=4331.9min speed=15018K/sec

unused devices: <none>
root@LacieNAS:/ #



Should I be able to see the rebuild in the web console too?

thank you
boerie
Donator VIP
 
Posts: 26
Joined: Tue Apr 15, 2025 12:29 pm

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Fri Oct 17, 2025 12:27 pm

:) Yes all is good !

You should see the recovery progress in the firmware web interface by loading the disk setup menu, but currently this information is not available because the nas database has not been updated (no raid is registered there after your problem).
Jocko
Site Admin - expert
 
Posts: 11576
Joined: Tue Apr 12, 2011 4:48 pm
Location: Orleans, France
