Front light blinking red and status Raid is degraded

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 7:59 am

Things look fine.
The directories and files have reappeared.

Code: Select all
[root@LaCie-5big /]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sdc2[5] sda2[0] sde2[4] sdb2[2] sdd2[3]
      15619969024 blocks super 1.0 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
     
md3 : active raid1 sdc5[4] sda5[0] sdb5[3] sdd5[2] sde5[1]
      255936 blocks [5/5] [UUUUU]
     
md2 : active raid1 sdc9[4] sda9[0] sdb9[3] sdd9[2] sde9[1]
      875456 blocks [5/5] [UUUUU]
     
md1 : active raid1 sdc8[4] sda8[0] sdb8[3] sdd8[2] sde8[1]
      843328 blocks [5/5] [UUUUU]
     
md0 : active raid1 sdc7[4] sde7[3] sdd7[2] sdb7[1] sda7[0]
      16000 blocks [5/5] [UUUUU]
     
unused devices: <none>


Code: Select all
[root@LaCie-5big /]# cat /proc/partitions|grep sdc
   8        0 3907018584 sdc
   8        1       1024 sdc1
   8        2 3904992392 sdc2
   8        3        934 sdc3
   8        4       1024 sdc4
   8        5     256000 sdc5
   8        6       8033 sdc6
   8        7      16065 sdc7
   8        8     843413 sdc8
   8        9     875543 sdc9
   8       10       8033 sdc10


Code: Select all
[root@LaCie-5big /]# mdadm --detail /dev/md4                         
/dev/md4:
        Version : 1.0
  Creation Time : Tue Oct 28 21:06:48 2014
     Raid Level : raid5
     Array Size : 15619969024 (14896.36 GiB 15994.85 GB)
  Used Dev Size : 3904992256 (3724.09 GiB 3998.71 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Aug  8 21:56:09 2023
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : (none):4
           UUID : ad468f58:53571e53:74558f1d:48545ec5
         Events : 2759000

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sda2
       5       8        2        1      active sync   /dev/sdc2
       3       8       66        2      active sync   /dev/sdd2
       2       8       18        3      active sync   /dev/sdb2
       4       8       50        4      active sync   /dev/sde2


To reset the disk and the sdc partitions, I used the "parted" command.
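
For context, a minimal sketch of what that parted reset might look like; the device name and the GPT label are assumptions, and mklabel destroys the existing partition table:
Code: Select all
# assumption: /dev/sdc is the replaced disk; mklabel wipes its partition table
parted --script /dev/sdc mklabel gpt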

Do you think there is anything else I should do?

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 8:10 am

I was too hasty. :roll:

The directories reappeared, but the main one with all its subfolders is empty....
Where have they gone?

There were two main folders on the NAS, one named Movies (Film) and the other Concerts (Concerti).
Under Movies there were subfolders, one for each letter of the alphabet, and inside those the actual movie files.
These subfolders are gone....

Re: Front light blinking red and status Raid is degraded

Postby Jocko » Wed Aug 09, 2023 10:03 am

Ok some good news...

I do not remember what the mount points for md4 are with the LaCie firmware. So post
Code: Select all
mount
and also dmesg (there may still be an issue with the XFS filesystem on the RAID).
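
If the full dmesg is too long to paste, a filtered view is usually enough; the grep pattern below is only a suggestion:
Code: Select all
mount | grep md4
dmesg | grep -iE 'md4|xfs|raid'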

Re: Front light blinking red and status Raid is degraded

Postby Jocko » Wed Aug 09, 2023 10:12 am

I found how md4 is mounted.
viewtopic.php?f=26&t=2209&start=120#p18778

So, using the cd and ls commands, browse the folders /shares/Share and /shares/Public.
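
For example:
Code: Select all
cd /shares/Share && ls -la
cd /shares/Public && ls -la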

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 10:17 am

Code: Select all
[root@LaCie-5big shares]# ls -la   
drwxr-xr-x    1 root     root          4096 Aug  8 00:45 .
drwxr-xr-x    1 root     root          4096 Feb  5  2018 ..
drwxrwxrwx  269 root     root         28672 Aug  8 22:08 Concerti
drwxrwxrwx   10 root     root          4096 Aug  8 22:09 Film
drwxrwxrwx    8 root     root          4096 Aug  3 19:17 Public
drwxrwxrwx    7 root     root          4096 Aug  3 19:17 Share



Code: Select all
[root@LaCie-5big Share]# ls -la 
drwxrwxrwx    7 root     root          4096 Aug  3 19:17 .
drwxr-xr-x    1 root     root          4096 Aug  8 00:45 ..
drwxrwxrwx    2 root     users           21 Jun 12  2016 .AppleDesktop
drwxrwxrwx    2 admin    users         4096 Jan 13  2023 .AppleDouble
drwxrwxrwx    3 root     root            25 Jun 12  2016 .lacie
drwxrwxrwx    3 root     users           25 Jun 12  2016 Network Trash Folder
drwxrwxrwx    4 root     users           38 Mar 25  2021 Temporary Items


Code: Select all
[root@LaCie-5big Public]# ls -la   
drwxrwxrwx    8 root     root          4096 Aug  3 19:17 .
drwxr-xr-x    1 root     root          4096 Aug  8 00:45 ..
drwxrwxrwx    2 root     root            69 Feb 28  2020 .AppleDB
drwxrwxrwx    2 root     nogroup         21 Mar 10  2017 .AppleDesktop
drwxrwxrwx    2 nobody   nogroup         36 Feb 28  2020 .AppleDouble
-rwxrwxrwx    1 admin    users         6148 Feb 28  2020 .DS_Store
drwxrwxrwx    3 root     root            25 Feb 28  2020 .lacie
drwxrwxrwx    3 root     nogroup         25 Mar 10  2017 Network Trash Folder
drwxrwxrwx    3 root     nogroup         25 Mar 10  2017 Temporary Items


Code: Select all
[root@LaCie-5big /]# dmesg
RAID1 conf printout:
[   14.150000]  --- wd:1 rd:5
[   14.150000]  disk 0, wo:0, o:1, dev:sda7
[   14.150000]  disk 1, wo:1, o:1, dev:sdb7
[   14.160000] md: recovery of RAID array md0
[   14.160000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[   14.170000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[   14.180000] md: using 128k window, over a total of 16000 blocks.
[   14.180000] md: md0: recovery done.
[   14.270000] RAID1 conf printout:
[   14.270000]  --- wd:2 rd:5
[   14.270000]  disk 0, wo:0, o:1, dev:sda7
[   14.270000]  disk 1, wo:0, o:1, dev:sdb7
[   14.530000] md: bind<sdd7>
[   14.610000] RAID1 conf printout:
[   14.610000]  --- wd:2 rd:5
[   14.610000]  disk 0, wo:0, o:1, dev:sda7
[   14.610000]  disk 1, wo:0, o:1, dev:sdb7
[   14.620000]  disk 2, wo:1, o:1, dev:sdd7
[   14.620000] md: recovery of RAID array md0
[   14.630000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[   14.630000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[   14.640000] md: using 128k window, over a total of 16000 blocks.
[   14.980000] md: bind<sde7>
[   17.740000] md: md0: recovery done.
[   17.900000] RAID1 conf printout:
[   17.900000]  --- wd:3 rd:5
[   17.900000]  disk 0, wo:0, o:1, dev:sda7
[   17.900000]  disk 1, wo:0, o:1, dev:sdb7
[   17.910000]  disk 2, wo:0, o:1, dev:sdd7
[   17.970000] RAID1 conf printout:
[   17.970000]  --- wd:3 rd:5
[   17.970000]  disk 0, wo:0, o:1, dev:sda7
[   17.970000]  disk 1, wo:0, o:1, dev:sdb7
[   17.980000]  disk 2, wo:0, o:1, dev:sdd7
[   17.980000]  disk 3, wo:1, o:1, dev:sde7
[   17.990000] md: recovery of RAID array md0
[   17.990000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[   18.000000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[   18.010000] md: using 128k window, over a total of 16000 blocks.
[   20.160000] md: md0: recovery done.
[   20.480000] RAID1 conf printout:
[   20.480000]  --- wd:4 rd:5
[   20.480000]  disk 0, wo:0, o:1, dev:sda7
[   20.480000]  disk 1, wo:0, o:1, dev:sdb7
[   20.490000]  disk 2, wo:0, o:1, dev:sdd7
[   20.490000]  disk 3, wo:0, o:1, dev:sde7
[   21.020000] md: md1 stopped.
[   21.030000] md: bind<sde8>
[   21.030000] md: bind<sdd8>
[   21.030000] md: bind<sdb8>
[   21.030000] md: bind<sda8>
[   21.040000] raid1: raid set md1 active with 4 out of 5 mirrors
[   21.050000] md1: detected capacity change from 0 to 863567872
[   21.140000]  md1: unknown partition table
[   21.980000] md: md2 stopped.
[   22.020000] md: bind<sde9>
[   22.020000] md: bind<sdd9>
[   22.030000] md: bind<sdb9>
[   22.030000] md: bind<sda9>
[   22.040000] raid1: raid set md2 active with 4 out of 5 mirrors
[   22.040000] md2: detected capacity change from 0 to 896466944
[   22.140000]  md2: unknown partition table
[   22.980000] md: md3 stopped.
[   22.990000] md: bind<sde5>
[   22.990000] md: bind<sdd5>
[   22.990000] md: bind<sdb5>
[   23.000000] md: bind<sda5>
[   23.000000] raid1: raid set md3 active with 4 out of 5 mirrors
[   23.010000] md3: detected capacity change from 0 to 262078464
[   23.100000]  md3: unknown partition table
[   28.600000] kjournald starting.  Commit interval 5 seconds
[   28.600000] EXT3-fs: mounted filesystem with writeback data mode.
[   28.660000] kjournald starting.  Commit interval 5 seconds
[   28.670000] EXT3 FS on md2, internal journal
[   28.670000] EXT3-fs: mounted filesystem with writeback data mode.
[   28.850000] unionfs: unionfs: new generation number 2
[   28.900000] unionfs: unionfs: new generation number 3
[   29.860000] Adding 255928k swap on /dev/md3.  Priority:-1 extents:1 across:255928k
[   30.210000] usbcore: registered new interface driver usbfs
[   30.210000] usbcore: registered new interface driver hub
[   30.210000] usbcore: registered new device driver usb
[   30.230000] Initializing USB Mass Storage driver...
[   30.230000] usbcore: registered new interface driver usb-storage
[   30.230000] USB Mass Storage support registered.
[   31.120000] udev: starting version 139
[   33.230000] uncorrectable error :
[   33.230000] end_request: I/O error, dev mtdblock0, sector 0
[   33.230000] Buffer I/O error on device mtdblock0, logical block 0
[   33.230000] uncorrectable error :
[   33.230000] end_request: I/O error, dev mtdblock0, sector 8
[   33.230000] Buffer I/O error on device mtdblock0, logical block 1
[   33.230000] end_request: I/O error, dev mtdblock0, sector 16
[   33.230000] Buffer I/O error on device mtdblock0, logical block 2
[   33.230000] uncorrectable error :
[   33.230000] end_request: I/O error, dev mtdblock0, sector 24
[   33.230000] Buffer I/O error on device mtdblock0, logical block 3
[   33.230000] uncorrectable error :
[   33.230000] end_request: I/O error, dev mtdblock0, sector 0
[   33.230000] Buffer I/O error on device mtdblock0, logical block 0
[   44.400000] uncorrectable error :
[   44.400000] end_request: I/O error, dev mtdblock0, sector 0
[   44.620000] md: md4 stopped.
[   44.630000] md: bind<sdc2>
[   44.640000] md: bind<sdd2>
[   44.640000] md: bind<sdb2>
[   44.650000] md: bind<sde2>
[   44.650000] md: bind<sda2>
[   44.650000] md: kicking non-fresh sdc2 from array!
[   44.650000] md: unbind<sdc2>
[   44.650000] md: export_rdev(sdc2)
[   44.760000] raid5: device sda2 operational as raid disk 0
[   44.760000] raid5: device sde2 operational as raid disk 4
[   44.760000] raid5: device sdb2 operational as raid disk 3
[   44.760000] raid5: device sdd2 operational as raid disk 2
[   44.760000] raid5: allocated 5258kB for md4
[   44.760000] raid5: raid level 5 set md4 active with 4 out of 5 devices, algorithm 2
[   44.760000] RAID5 conf printout:
[   44.760000]  --- rd:5 wd:4
[   44.760000]  disk 0, o:1, dev:sda2
[   44.760000]  disk 2, o:1, dev:sdd2
[   44.760000]  disk 3, o:1, dev:sdb2
[   44.760000]  disk 4, o:1, dev:sde2
[   44.760000] md4: detected capacity change from 0 to 15994848280576
[   45.370000]  md4: unknown partition table
[   48.670000] eth1: started
[   62.050000] loop: module loaded
[   66.980000] XFS mounting filesystem md4
[   67.200000] XFS: Invalid block length (0x0) given for buffer
[   67.200000] XFS: Log inconsistent (didn't find previous header)
[   67.200000] XFS: empty log check failed
[   67.200000] XFS: log mount/recovery failed: error 5
[   67.200000] XFS: log mount failed
[  139.620000] iSCSI Enterprise Target Software - version 1.4.19
[  139.620000] iscsi_trgt: Registered io type fileio
[  139.620000] iscsi_trgt: Registered io type blockio
[  139.620000] iscsi_trgt: Registered io type nullio
[  147.220000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[  147.220000] ehci_marvell ehci_marvell.70059: Marvell Orion EHCI
[  147.220000] ehci_marvell ehci_marvell.70059: new USB bus registered, assigned bus number 1
[  147.250000] ehci_marvell ehci_marvell.70059: irq 19, io base 0xf1050100
[  147.270000] ehci_marvell ehci_marvell.70059: USB 2.0 started, EHCI 1.00
[  147.270000] usb usb1: configuration #1 chosen from 1 choice
[  147.270000] hub 1-0:1.0: USB hub found
[  147.270000] hub 1-0:1.0: 1 port detected
[  147.590000] usb 1-1: new high speed USB device using ehci_marvell and address 2
[  147.740000] usb 1-1: configuration #1 chosen from 1 choice
[  147.740000] hub 1-1:1.0: USB hub found
[  147.750000] hub 1-1:1.0: 2 ports detected
[  155.180000] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[  155.180000] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[  155.180000] NFSD: starting 90-second grace period
[  165.130000] warning: `proftpd' uses 32-bit capabilities (legacy support in use)
[  378.020000] eth0: link down
[  380.720000] eth1: link up, full duplex, speed 1 Gbps
[  389.430000] eth1: link down
[  392.140000] eth0: link up, full duplex, speed 1 Gbps
[ 6798.440000] eth0: link down
[12331.940000] eth0: link up, full duplex, speed 1 Gbps
[13424.370000] ata1.01: exception Emask 0x10 SAct 0x0 SErr 0x10002 action 0xf
[13424.370000] ata1.01: SError: { RecovComm PHYRdyChg }
[13424.370000] ata1.01: hard resetting link
[13430.130000] ata1.01: hard resetting link
[13430.480000] ata1.01: limiting SATA link speed to 1.5 Gbps
[13435.480000] ata1.01: hard resetting link
[13435.830000] ata1.01: disabled
[13435.830000] ata1: EH complete
[13435.830000] ata1.01: detaching (SCSI 0:1:0:0)
[13435.830000] sd 0:1:0:0: [sdc] Synchronizing SCSI cache
[13435.850000] sd 0:1:0:0: [sdc] Result: hostbyte=0x04 driverbyte=0x00
[13435.850000] sd 0:1:0:0: [sdc] Stopping disk
[13435.850000] sd 0:1:0:0: [sdc] START_STOP FAILED
[13435.850000] sd 0:1:0:0: [sdc] Result: hostbyte=0x04 driverbyte=0x00
[13459.640000] ata1.01: exception Emask 0x10 SAct 0x0 SErr 0x4050002 action 0xf
[13459.640000] ata1.01: SError: { RecovComm PHYRdyChg CommWake DevExch }
[13459.640000] ata1.01: hard resetting link
[13465.310000] ata1.01: ATA-10: WDC WD40EFAX-68JH4N1, 83.00A83, max UDMA/133
[13465.310000] ata1.01: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32)
[13465.330000] ata1.01: configured for UDMA/133
[13465.330000] ata1: EH complete
[13465.350000] scsi 0:1:0:0: Direct-Access     ATA      WDC WD40EFAX-68J 83.0 PQ: 0 ANSI: 5
[13465.350000] sd 0:1:0:0: Attached scsi generic sg0 type 0
[13465.350000] sd 0:1:0:0: [sdc] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[13465.350000] sd 0:1:0:0: [sdc] 4096-byte physical blocks
[13465.350000] sd 0:1:0:0: [sdc] Write Protect is off
[13465.350000] sd 0:1:0:0: [sdc] Mode Sense: 00 3a 00 00
[13465.350000] sd 0:1:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[13465.350000]  sdc:
[13465.490000] sd 0:1:0:0: [sdc] Attached SCSI disk
[13507.090000] XFS mounting filesystem md4
[13507.400000] Ending clean XFS mount for filesystem: md4
[13507.400000] XFS quotacheck md4: Please wait.
[13514.310000] XFS quotacheck md4: Done.
[13570.990000] md: bind<sdc5>
[13571.440000] RAID1 conf printout:
[13571.440000]  --- wd:4 rd:5
[13571.440000]  disk 0, wo:0, o:1, dev:sda5
[13571.440000]  disk 1, wo:0, o:1, dev:sde5
[13571.440000]  disk 2, wo:0, o:1, dev:sdd5
[13571.440000]  disk 3, wo:0, o:1, dev:sdb5
[13571.440000]  disk 4, wo:1, o:1, dev:sdc5
[13571.460000] md: recovery of RAID array md3
[13571.460000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[13571.460000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[13571.460000] md: using 128k window, over a total of 255936 blocks.
[13586.180000] md: md3: recovery done.
[13586.300000] RAID1 conf printout:
[13586.300000]  --- wd:5 rd:5
[13586.300000]  disk 0, wo:0, o:1, dev:sda5
[13586.300000]  disk 1, wo:0, o:1, dev:sde5
[13586.300000]  disk 2, wo:0, o:1, dev:sdd5
[13586.300000]  disk 3, wo:0, o:1, dev:sdb5
[13586.300000]  disk 4, wo:0, o:1, dev:sdc5
[13595.220000] md: bind<sdc9>
[13595.410000] RAID1 conf printout:
[13595.410000]  --- wd:4 rd:5
[13595.410000]  disk 0, wo:0, o:1, dev:sda9
[13595.410000]  disk 1, wo:0, o:1, dev:sde9
[13595.410000]  disk 2, wo:0, o:1, dev:sdd9
[13595.410000]  disk 3, wo:0, o:1, dev:sdb9
[13595.410000]  disk 4, wo:1, o:1, dev:sdc9
[13595.420000] md: recovery of RAID array md2
[13595.420000] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[13595.420000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[13595.420000] md: using 128k window, over a total of 875456 blocks.
[13605.070000] md: bind<sdc8>
[13605.200000] RAID1 conf printout:
[13605.200000]  --- wd:4 rd:5
[13605.200000]  disk 0, wo:0, o:1, dev:sda8
[13605.200000]  disk 1, wo:0, o:1, dev:sde8
[13605.200000]  disk 2, wo:0, o:1, dev:sdd8
[13605.200000]  disk 3, wo:0, o:1, dev:sdb8
[13605.200000]  disk 4, wo:1, o:1, dev:sdc8
[13605.210000] md: delaying recovery of md1 until md2 has finished (they share one or more physical units)
[13614.290000] md: bind<sdc7>
[13614.440000] RAID1 conf printout:
[13614.440000]  --- wd:4 rd:5
[13614.440000]  disk 0, wo:0, o:1, dev:sda7
[13614.440000]  disk 1, wo:0, o:1, dev:sdb7
[13614.440000]  disk 2, wo:0, o:1, dev:sdd7
[13614.440000]  disk 3, wo:0, o:1, dev:sde7
[13614.440000]  disk 4, wo:1, o:1, dev:sdc7
[13614.440000] md: delaying recovery of md0 until md2 has finished (they share one or more physical units)
[13614.440000] md: delaying recovery of md1 until md2 has finished (they share one or more physical units)
[13624.360000] md: bind<sdc2>
[13624.620000] RAID5 conf printout:
[13624.620000]  --- rd:5 wd:4
[13624.620000]  disk 0, o:1, dev:sda2
[13624.620000]  disk 1, o:1, dev:sdc2
[13624.620000]  disk 2, o:1, dev:sdd2
[13624.620000]  disk 3, o:1, dev:sdb2
[13624.620000]  disk 4, o:1, dev:sde2
[13624.630000] md: delaying recovery of md4 until md2 has finished (they share one or more physical units)
[13624.630000] md: delaying recovery of md0 until md4 has finished (they share one or more physical units)
[13624.640000] md: delaying recovery of md1 until md2 has finished (they share one or more physical units)
[13644.820000] md: md2: recovery done.
[13644.840000] md: recovery of RAID array md1
[13644.840000] md: minimum _guaranteed_  speed: 20000 KB/sec/disk.
[13644.840000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[13644.840000] md: using 128k window, over a total of 843328 blocks.
[13644.840000] md: delaying recovery of md0 until md4 has finished (they share one or more physical units)
[13644.840000] md: delaying recovery of md4 until md1 has finished (they share one or more physical units)
[13645.280000] RAID1 conf printout:
[13645.280000]  --- wd:5 rd:5
[13645.280000]  disk 0, wo:0, o:1, dev:sda9
[13645.280000]  disk 1, wo:0, o:1, dev:sde9
[13645.280000]  disk 2, wo:0, o:1, dev:sdd9
[13645.280000]  disk 3, wo:0, o:1, dev:sdb9
[13645.280000]  disk 4, wo:0, o:1, dev:sdc9
[13662.370000] md: md1: recovery done.
[13662.380000] md: recovery of RAID array md4
[13662.380000] md: minimum _guaranteed_  speed: 20000 KB/sec/disk.
[13662.380000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[13662.380000] md: using 128k window, over a total of 3904992256 blocks.
[13662.390000] md: delaying recovery of md0 until md4 has finished (they share one or more physical units)
[13662.510000] RAID1 conf printout:
[13662.510000]  --- wd:5 rd:5
[13662.510000]  disk 0, wo:0, o:1, dev:sda8
[13662.510000]  disk 1, wo:0, o:1, dev:sde8
[13662.510000]  disk 2, wo:0, o:1, dev:sdd8
[13662.510000]  disk 3, wo:0, o:1, dev:sdb8
[13662.510000]  disk 4, wo:0, o:1, dev:sdc8
[89750.970000] md: md4: recovery done.
[89751.030000] md: recovery of RAID array md0
[89751.030000] md: minimum _guaranteed_  speed: 20000 KB/sec/disk.
[89751.030000] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
[89751.030000] md: using 128k window, over a total of 16000 blocks.
[89751.210000] RAID5 conf printout:
[89751.210000]  --- rd:5 wd:5
[89751.210000]  disk 0, o:1, dev:sda2
[89751.210000]  disk 1, o:1, dev:sdc2
[89751.210000]  disk 2, o:1, dev:sdd2
[89751.210000]  disk 3, o:1, dev:sdb2
[89751.210000]  disk 4, o:1, dev:sde2
[89754.200000] md: md0: recovery done.
[89754.480000] RAID1 conf printout:
[89754.480000]  --- wd:5 rd:5
[89754.480000]  disk 0, wo:0, o:1, dev:sda7
[89754.480000]  disk 1, wo:0, o:1, dev:sdb7
[89754.480000]  disk 2, wo:0, o:1, dev:sdd7
[89754.480000]  disk 3, wo:0, o:1, dev:sde7
[89754.480000]  disk 4, wo:0, o:1, dev:sdc7

Re: Front light blinking red and status Raid is degraded

Postby Jocko » Wed Aug 09, 2023 11:12 am

And then, in the main folders Film and Concerti, do you see your files?

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 12:13 pm

Jocko wrote: And then, in the main folders Film and Concerti, do you see your files?


No. Only in Concerti do I see the subdirectories and their content; in Film this is the situation:

Code: Select all
[root@LaCie-5big Film]# ls -la
drwxrwxrwx   10 root     root          4096 Aug  8 22:09 .
drwxr-xr-x    1 root     root          4096 Aug  8 00:45 ..
drwxrwxrwx    4 admin    users         4096 Sep 23  2016 .@__thumb
drwxrwxrwx    2 root     root          4096 Aug  8 22:09 .AppleDB
drwxrwxrwx    2 root     users           21 Aug  8 22:08 .AppleDesktop
drwxrwxrwx    2 admin    users           36 Aug  8 22:09 .AppleDouble
-rwxrwxrwx    1 admin    users         6148 Aug  8 22:09 .DS_Store
drwxrwsrwx    3 Domenico users           25 Aug 18  2016 .TemporaryItems
drwxrwxrwx    3 root     root            25 Aug  8 22:10 .lacie
drwxrwxrwx    3 root     users           25 Aug  8 22:08 Network Trash Folder
drwxrwxrwx    3 root     users           25 Aug  8 22:08 Temporary Items


As mentioned, there used to be at least 26 subfolders here, which are now gone.
I don't know for certain whether everything that was under Concerti is still there, but at least the large majority of it is.

Re: Front light blinking red and status Raid is degraded

Postby Jocko » Wed Aug 09, 2023 12:32 pm

Ok

:roll: I am afraid you have definitively lost your data in Film. When the LaCie firmware needs to do a recovery, your data is moved into a "recovery" subfolder in the Private share, which is not present here.
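
A quick way to check whether the firmware left such a recovery folder anywhere under the shares, assuming the NAS's find supports -maxdepth and -iname:
Code: Select all
# look for any firmware-created recovery folder up to a few levels deep
find /shares -maxdepth 3 -type d -iname '*recover*'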

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 12:44 pm

Jocko wrote:Ok

:roll: I am afraid you have definitively lost your data in Film. When the LaCie firmware needs to do a recovery, your data is moved into a "recovery" subfolder in the Private share, which is not present here.


I think so, but maybe not.
The RAID capacity is 15.99 TB; the data inside it amounts to 15.74 TB, with 247 GB of free space.

Concerti and its subfolders occupy 2.7 TB; the rest belongs to the Film folder and is still on the NAS.
Of course those folders will have to be rebuilt, and I don't know if that's possible.
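
For reference, the numbers above can be double-checked from the shell; df and du are assumed to be available on the NAS:
Code: Select all
df -h                                   # overall capacity and free space of the mounted RAID
du -sh /shares/Concerti /shares/Film    # space used by each main folder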

Thanks a lot for your help, really appreciated.
I will try to do something and update here if I can.

Re: Front light blinking red and status Raid is degraded

Postby Glaven Clattuck » Wed Aug 09, 2023 3:08 pm

A small step forward.
I used this command
Code: Select all
ls -R > filename1

to list all the files and directories contained on the NAS.
The result is a file of over 262,000 lines.
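A listing that size is easier to query with grep than by scrolling; a minimal example (the title is just a placeholder):
Code: Select all
grep -in 'some movie title' filename1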
Searching through it, I found my films.
The majority are under this path:

./media/internal_1/lost+found/

Is there any way to rebuild the old paths, or is the only option to move the files back one by one?
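
If the entries under lost+found still carry usable names, a scripted move is possible. A minimal sketch, with the destination /shares/Film assumed and the mv kept as a dry run:
Code: Select all
cd /media/internal_1/lost+found
for entry in *; do
    # dry run: prints the command instead of executing it; remove 'echo' once the output looks right
    echo mv "$entry" /shares/Film/
done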
