Raid clean, degraded completed, estimated time ...

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Fri May 17, 2024 3:14 pm

Hello guys,
I don't want to create a new topic when it is not needed.

I just received the error message below from my LaCie. I can also see that the LEDs are blinking red.

Could you please help me? Thank you in advance!

This is an automatically generated mail message from mdadm
running on kirkwood-4.14.133

A DegradedArray event had been detected on md device /dev/md0.


P.S. The /proc/mdstat file currently contains the following:

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdb8[1] sde8[4] sdd8[3] sdc8[2]
7800905728 blocks super 1.0 level 5, 512k chunk, algorithm 2 [5/4] [_UUUU]
bitmap: 4/15 pages [16KB], 65536KB chunk

Martin

Re: Raid clean, degraded completed, estimated time ...

Postby Jocko » Sat May 18, 2024 9:40 am

Hi

That means you have a faulty disk in your RAID; it should be sda.
You can confirm that with:
Code: Select all
mdadm --detail /dev/md0
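
If mdadm --detail does not answer, for instance because the array is not assembled, the per-member superblocks can also be read directly. This is a generic mdadm check, not something specific to this firmware, and it assumes the RAID members are partition 8 on sda through sde as shown in the mdstat output above:
Code: Select all
# show each member's role, array state and event counter
mdadm --examine /dev/sd[abcde]8 | grep -E '^/dev|Device Role|Array State|Events'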

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Sun May 19, 2024 8:34 am

Hi Jocko,

Thank you for your reply. Unfortunately something is really f....... Today I wanted to start the NAS again and investigate what is going on, but I cannot reach it on the network, nor directly via LAN. Any suggestions, please?

Thank you :(

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Tue May 21, 2024 5:28 pm

Hi Jocko,
I was able to boot the standalone kernel file kirkwood 171 and log on, but unfortunately the command you provided is not working. Is it possible to restore the FVDW firmware without data loss?
Thanks a lot !

Re: Raid clean, degraded completed, estimated time ...

Postby fvdw » Wed May 22, 2024 7:21 am

If you use the standalone kernel and want to use the mdadm command, you need to run fvdw-sl-programs:
Code: Select all
/fvdw-sl-programs

and upload the extra package mentioned in the list (option 4: upload and extract glibc mini and tools); that will enable you to use mdadm.
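
A quick way to confirm the upload worked is a generic shell check (this is not part of the fvdw-sl-programs menu itself):
Code: Select all
# mdadm should now be found on the path and report its version
which mdadm
mdadm --version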

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Thu May 23, 2024 6:58 am

Hello,
Thank you for your help. I was able to run the script, but unfortunately it says: md0 does not appear to be active :(

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Thu May 23, 2024 3:52 pm

Hello,

So I tried the commands which Jocko posted for Draftmancorp, and here is the output. It really looks like the sda disk, which is the first one, is ..... dead :(

Can you please help me rebuild the RAID 5? I was looking for a procedure but with no luck.

Also a question about the failed HDD: should I buy the same model, or does it not matter? I found the ST2000NT001, also a Seagate 2 TB; is it a good option?

Thank you!!!! :)

Here is the output:

Code: Select all
root@fvdw-sta-kirkwood:/ # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # cat /proc/partitions|grep sd
   8        0 1953514584 sdc
   8        4          1 sdc4
   8        5     835380 sdc5
   8        6      64260 sdc6
   8        7     514080 sdc7
   8        8 1950226740 sdc8
   8       16 1953514584 sdb
   8       20          1 sdb4
   8       21     835380 sdb5
   8       22      64260 sdb6
   8       23     514080 sdb7
   8       24 1950226740 sdb8
   8       32 1953514584 sde
   8       36          1 sde4
   8       37     835380 sde5
   8       38      64260 sde6
   8       39     514080 sde7
   8       40 1950226740 sde8
   8       48 1953514584 sdd
   8       52          1 sdd4
   8       53     835380 sdd5
   8       54      64260 sdd6
   8       55     514080 sdd7
   8       56 1950226740 sdd8

____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # gdisk -l /dev/sde
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 3748181 sectors (1.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   5         3341520         5012279   815.8 MiB   8300  Linux filesystem
   6         5140800         5269319   62.8 MiB    8300  Linux filesystem
   7         5397840         6425999   502.0 MiB   8300  Linux filesystem
   8         6554520      3907007999   1.8 TiB     FD00  Linux RAID
root@fvdw-sta-kirkwood:/ # cat /proc/partitions|grep sd
   8        0 1953514584 sdc
   8        4          1 sdc4
   8        5     835380 sdc5
   8        6      64260 sdc6
   8        7     514080 sdc7
   8        8 1950226740 sdc8
   8       16 1953514584 sdb
   8       20          1 sdb4
   8       21     835380 sdb5
   8       22      64260 sdb6
   8       23     514080 sdb7
   8       24 1950226740 sdb8
   8       32 1953514584 sde
   8       36          1 sde4
   8       37     835380 sde5
   8       38      64260 sde6
   8       39     514080 sde7
   8       40 1950226740 sde8
   8       48 1953514584 sdd
   8       52          1 sdd4
   8       53     835380 sdd5
   8       54      64260 sdd6
   8       55     514080 sdd7
   8       56 1950226740 sdd8
____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
unused devices: <none>
____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # mdadm --detail --scan
____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
____________________________________________________________________________

root@fvdw-sta-kirkwood:/ # mdadm --examine /dev/sd[abcd]8
/dev/sda8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 30b66c2b:dc82e187:71c77728:dd40f1de
           Name : fvdwsl-base.local:0
  Creation Time : Fri Feb  9 20:49:50 2024
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
          State : clean
    Device UUID : 59b15003:60fcd15d:757a4fb7:5291f92c

Internal Bitmap : -16 sectors from superblock
    Update Time : Fri May 17 15:08:56 2024
       Checksum : 9867e5a3 - correct
         Events : 149

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdb8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 30b66c2b:dc82e187:71c77728:dd40f1de
           Name : fvdwsl-base.local:0
  Creation Time : Fri Feb  9 20:49:50 2024
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
          State : clean
    Device UUID : 25376e8d:8cee0b1a:8daee430:9406d2d3

Internal Bitmap : -16 sectors from superblock
    Update Time : Fri May 17 15:08:56 2024
       Checksum : ff2d06f4 - correct
         Events : 149

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdc8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 30b66c2b:dc82e187:71c77728:dd40f1de
           Name : fvdwsl-base.local:0
  Creation Time : Fri Feb  9 20:49:50 2024
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
          State : clean
    Device UUID : 59b15003:60fcd15d:757a4fb7:5291f92c

Internal Bitmap : -16 sectors from superblock
    Update Time : Fri May 17 15:08:56 2024
       Checksum : 9867e5a3 - correct
         Events : 149

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdd8:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 30b66c2b:dc82e187:71c77728:dd40f1de
           Name : fvdwsl-base.local:0
  Creation Time : Fri Feb  9 20:49:50 2024
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3900453200 (1859.88 GiB 1997.03 GB)
     Array Size : 7800905728 (7439.52 GiB 7988.13 GB)
  Used Dev Size : 3900452864 (1859.88 GiB 1997.03 GB)
   Super Offset : 3900453464 sectors
          State : clean
    Device UUID : f691b2e0:e7ff9f29:76c29721:40682d2e

Internal Bitmap : -16 sectors from superblock
    Update Time : Fri May 17 15:08:56 2024
       Checksum : ad13e8b7 - correct
         Events : 149

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : .AAAA ('A' == active, '.' == missing)

____________________________________________________________________________
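For reference, the usual generic way to start a degraded md array from its remaining members is sketched below. This is not advice given at this point in the thread (the next reply asks to analyse sda first), and the member names sd[bcde]8 are an assumption taken from the --examine output above:
Code: Select all
# generic md recovery sketch; assumes the four healthy members are sdb8, sdc8, sdd8 and sde8
# --run starts the array even though one of the five members is missing
mdadm --assemble --run /dev/md0 /dev/sd[bcde]8
# then check the result
cat /proc/mdstat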

Re: Raid clean, degraded completed, estimated time ...

Postby fvdw » Thu May 23, 2024 8:36 pm

Before jumping to conclusions, let's first analyse sda. As partition 8 on sda still seems to be present, if you can get the firmware running maybe the array can be made active.

When running the standalone kernel, check the partition table:
Code: Select all
fdisk -l /dev/sda

Or, when you have used a GPT table:
Code: Select all
gdisk -l /dev/sda

If it can read the partition table, then try to mount sda7:
Code: Select all
mkdir / sda7
mount /dev/sda7 / sda7

If that succeeds, list the boot log file on that partition:
Code: Select all
cat /sda7/boot.log

From that we can see if the firmware is loaded from sda1 or sda2.
If that is the case, then mount either sda1 or sda2 in the same way as sda7 was mounted and list the boot log of that partition (see the sketch below).
Maybe it will show why firmware loading is failing.
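
Following the same pattern for sda1 (the mount point and the log file name are assumptions, mirroring the sda7 step; repeat with sda2 if the boot log points there):
Code: Select all
mkdir /sda1
mount /dev/sda1 /sda1
ls /sda1                # locate the boot log on this partition
cat /sda1/boot.log      # assuming it uses the same file name as on sda7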

Re: Raid clean, degraded completed, estimated time ...

Postby razzor0 » Fri May 24, 2024 5:50 am

Hello fvdw,
Thank you for your reply. I ran the commands you provided; here is the output.

Code: Select all
root@fvdw-sta-kirkwood:/ # fdisk -l /dev/sda

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda4             201      243200  1951897500   5 Extended
/dev/sda5             209         312      835380  83 Linux
/dev/sda6             321         328       64260  83 Linux
/dev/sda7             337         400      514080  83 Linux
/dev/sda8             409      243200  1950226740  fd Linux raid autodetect
_____________________________________________________________________

root@fvdw-sta-kirkwood:/ # gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 8-sector boundaries
Total free space is 3748181 sectors (1.8 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   5         3341520         5012279   815.8 MiB   8300  Linux filesystem
   6         5140800         5269319   62.8 MiB    8300  Linux filesystem
   7         5397840         6425999   502.0 MiB   8300  Linux filesystem
   8         6554520      3907007999   1.8 TiB     FD00  Linux RAID

_____________________________________________________________________________________

root@fvdw-sta-kirkwood:/ # mkdir /sda7
mkdir: can't create directory '/sda7': File exists
___________________________________________________________________

root@fvdw-sta-kirkwood:/ # mount /dev/sda7 / sda7
Usage: mount -V                 : print version
       mount -h                 : print this help
       mount                    : list mounted filesystems
       mount -l                 : idem, including volume labels
So far the informational part. Next the mounting.
The command is `mount [-t fstype] something somewhere'.
Details found in /etc/fstab may be omitted.
       mount -a [-t|-O] ...     : mount all stuff from /etc/fstab
       mount device             : mount device at the known place
       mount directory          : mount known device here
       mount -t type dev dir    : ordinary mount command
Note that one does not really mount a device, one mounts
a filesystem (of the given type) found on the device.
One can also mount an already visible directory tree elsewhere:
       mount --bind olddir newdir
or move a subtree:
       mount --move olddir newdir
One can change the type of mount containing the directory dir:
       mount --make-shared dir
       mount --make-slave dir
       mount --make-private dir
       mount --make-unbindable dir
One can change the type of all the mounts in a mount subtree
containing the directory dir:
       mount --make-rshared dir
       mount --make-rslave dir
       mount --make-rprivate dir
       mount --make-runbindable dir
A device can be given by name, say /dev/hda1 or /dev/cdrom,
or by label, using  -L label  or by uuid, using  -U uuid .
Other options: [-nfFrsvw] [-o options] [-p passwdfd].
For many more details, say  man 8 mount .
___________________________________________________________________

root@fvdw-sta-kirkwood:/ # cat /sda7/boot.log
cat: can't open '/sda7/boot.log': No such file or directory

Re: Raid clean, degraded completed, estimated time ...

Postby fvdw » Fri May 24, 2024 8:51 am

OK, sorry, there is a typo, my fault; in the mount command it should be:

Code: Select all
mount /dev/sda7 /sda7

(without the space in front of the second sda7)

Please try again.

PS: it is strange that on sda partitions 1-3 are missing :scratch These are the partitions where the firmware is present and loaded.

Can you also post the output of:
Code: Select all
dmesg
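
If the full dmesg output is very long, a filtered view of the disk-related lines may be easier to post (a generic suggestion, not something requested above):
Code: Select all
# keep only ATA/SCSI disk messages, where a failing drive usually shows up
dmesg | grep -iE 'ata|sd[a-e]|error'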
