Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 02, 2025 4:10 pm

RAID reported degraded but intact after disk failure.

Bought a new HDD and initialised via console

After restart, all HDD are present but one has dropped out of the original RAID

All and any advice welcomed

Updated with screenshot: disk D failed and has been replaced. Disk A was never part of the RAID. Disks B, C and E were part of the RAID and intact prior to initialising disk D.

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Sat Oct 04, 2025 9:57 am

Hi

So your raid looks like marmalade :shocked

First
"Bought a new HDD and initialised via console"
That was not the right way to do it, and it is explained in the help menu :whistle

You have to use the console only for the system disk.

So I need to understand your current disk environment before I can help you. Currently you cannot restore redundancy, and there are not enough members left to get even a functional degraded array. So please take no further action, or you may lose your data now.

So
- What was the faulty disk? sdd, ...? And did you plug the new disk into the same slot? How did you make sure it was really the faulty disk? Did you get a notification?
- Did you swap any other disks between their slots, or not?
- Post these outputs:
Code: Select all
cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --examine /dev/sd[abcde]8
gdisk -l /dev/sdd

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Wed Oct 08, 2025 6:33 am

Hi and thank you

Warning shown in web interface that the NAS was degraded with no redundancy
SMART could not read/detect sdd
Powered down and replaced sdd with an identical new drive
No other drives removed or moved
Powered up, SMART detected the new disk
Initialised it via the disk set-up menu option

output below

sincere thanks once again

Code: Select all
root@LacieNAS:/ # cat proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

root@LacieNAS:/ # mdadm --detail /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
root@LacieNAS:/ #

root@LacieNAS:/ # mdadm --examine dev/sd[abcde]8
mdadm: No md superblock detected on dev/sda8.
dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 1a941ae0:d0e32eb5:6d41a0f6:99054107
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Fri Apr 24 11:28:18 2020
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=560 sectors
          State : clean
    Device UUID : 24d65b2d:c9b62a65:63e6d958:3ada4713

    Update Time : Thu Oct  2 11:40:26 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : c6e291b8 - correct
         Events : 854117

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
mdadm: No md superblock detected on dev/sdc8.
mdadm: No md superblock detected on dev/sdd8.
dev/sde8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : 1a941ae0:d0e32eb5:6d41a0f6:99054107
           Name : LacieNAS.local:0  (local to host LacieNAS.local)
  Creation Time : Fri Apr 24 11:28:18 2020
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 7808230704 (3723.25 GiB 3997.81 GB)
     Array Size : 11712345600 (11169.76 GiB 11993.44 GB)
  Used Dev Size : 7808230400 (3723.25 GiB 3997.81 GB)
   Super Offset : 7808230968 sectors
   Unused Space : before=0 sectors, after=560 sectors
          State : clean
    Device UUID : 62a57ca9:ed008745:3762cd17:adba68b6

    Update Time : Thu Oct  2 11:40:26 2025
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 85740765 - correct
         Events : 854117

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
root@LacieNAS:/ #

root@LacieNAS:/ # gdisk -1 /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Usage: gdisk [-l] device_file
root@LacieNAS:/ #

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 09, 2025 12:39 pm

Hi

Maybe you did not paste the full output, because the result for sdc8 is missing :thinking

So can you post
Code: Select all
mdadm --examine /dev/sdc8


About gdisk
Code: Select all
root@LacieNAS:/ # gdisk -1 /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Usage: gdisk [-l] device_file
the option -l is not the digit 1 but the letter l (as in list).
So post again
Code: Select all
gdisk -l /dev/sdd
and also
Code: Select all
gdisk -l /dev/sdc
(I do not like that you got no output for sdc8)

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Thu Oct 09, 2025 3:11 pm

apologies

Code: Select all
root@LacieNAS:/ # mdadm --examine /dev/sdc8
mdadm: No md superblock detected on /dev/sdc8.
root@LacieNAS:/ #

root@LacieNAS:/ # gdisk -l /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdd: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): B62B8F51-9636-4497-8A82-DA5A8E7F41C6
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5803998 sectors (2.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   8         5804032      7814037134   3.6 TiB     8300  Linux filesystem
root@LacieNAS:/ #

root@LacieNAS:/ # gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): C2F1A7D9-B4C8-4E18-8719-31D532CB2990
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 5803998 sectors (2.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   8         5804032      7814037134   3.6 TiB     FD00  Linux RAID

root@LacieNAS:/ #

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Thu Oct 09, 2025 3:55 pm

:disapprove

That does not smell good

Currently you have only 2 clean raid members, whereas 3 members are required to assemble your RAID.

About sdc8, where no raid superblock is detected: did any events happen to it previously?

About your former sdd disk: do you still have it? And did you replace it only because it was detected as faulty, or because it was dead?

Currently there are only 2 ways to restore your raid, and neither comes with any assurance of success:
- if only the raid superblock is lost on sdc, then rebuild your raid manually with the same parameters as those used to build the original raid
- if the former sdd disk is still clean enough, try to assemble the raid with it (see the sketch below)...
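
For illustration only, a minimal sketch of what that second option could look like. Assumptions not confirmed in this thread: the old disk is refitted, detected again as /dev/sdd, and its raid superblock on partition 8 is still readable. Do not run it before the situation is confirmed.
Code: Select all
# sketch only: assemble a degraded raid5 from the two clean members plus the refitted old disk
# --force accepts the older event count on the old member, --run starts the array degraded
mdadm --assemble --force --run /dev/md0 /dev/sdb8 /dev/sdd8 /dev/sde8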

Please post:
Code: Select all
cat /etc/mdadm.conf

Re: Replaced and initialised HDD. RAID no longer intact

Postby boerie » Fri Oct 10, 2025 9:32 am

hello again

I still have the original disk; it was replaced because it was flagged as faulty, but it was not dead. I could reinstall it.

Here is the output requested (I have deleted my email address).

thank you once again

Code: Select all
root@LacieNAS:/ # cat /etc/mdadm.conf
CREATE owner=root group=root mode=0666 auto=yes metadata=1.0
PROGRAM /usr/bin/mdadm-events
DEVICE /dev/sd* /dev/se*

MAILADDR xxx

ARRAY /dev/md0 metadata=1.0 level=raid5 num-devices=4 UUID=1a941ae0:d0e32eb5:6d41a0f6:99054107
root@LacieNAS:/ #

Re: Replaced and initialised HDD. RAID no longer intact

Postby Jocko » Fri Oct 10, 2025 1:41 pm

So we can try to recreate the superblock on sdc8.

Currently we know sdb (Device Role : Active device 0) is the first member and sde (Device Role : Active device 3) is the last, but we do not know the order of sdc and sdd.
But we can logically believe sdc is the 2nd member: the Array State AA.A on the surviving members shows that slot 2 is the missing one, which matches the replaced sdd, so slot 1 must be sdc.

So try to run this command :
Code: Select all
mdadm --create /dev/md0 --assume-clean --chunk=512 --level=5 --raid-devices=4 /dev/sdb8 /dev/sdc8 missing /dev/sde8


Then post
Code: Select all
cat /proc/mdstat
mdadm --examine /dev/sd[bcde]8


If the raid device is built, you can try to mount it to check whether you can see your data:
Code: Select all
mkdir /tmp/test
mount /dev/md0 /tmp/test

ls -al /tmp/test
here you should see your file shares.

unmount it
Code: Select all
umount /tmp/test


But do not restart your NAS, because we still have to save the new raid settings.

Note: if the command fails, try again replacing 'missing' with /dev/sdd8.
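
For reference, a minimal sketch of how the new raid settings are usually captured on a plain mdadm setup, assuming /etc/mdadm.conf is where this firmware keeps the array definition (the exact procedure here may differ, so wait for confirmation before writing anything):
Code: Select all
# print the ARRAY line describing the newly created md0 (nothing is written yet)
mdadm --detail --scan
The new ARRAY line (with the new UUID) would then replace the existing one in /etc/mdadm.conf.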