Looking for a *little* ;) help, please, with a LaCie 5big2:
CPU: Feroceon 88FR131 [56251311] revision 1 (ARMv5TE), cr=00053177
CPU: VIVT data cache, VIVT instruction cache
Machine: LaCie 5Big Network v2
Ignoring unrecognised tag 0x41000403
History:
1) 5 × 2 TB disks in RAID 6.
2) Asked the LaCie to delete a 1.5 TB Time Machine file (OK, I know, a mistake).
3) After that: no more shares, and the LaCie reports the NAS as only part full :(
4) Discovered plugout.net & fvdw :woohoo and am now running UIMAGE-395-NWSP2CL-179-standalone.
5) Problem 1: all disks are visible as SCSI devices, but /dev/sdd itself is apparently not accessible, although /dev/sdd2 will join the RAID array. I can see no difference in their status as the disks are identified during boot (see the checks I list just after the dmesg output below). Not sure if this is a contributor to...
6) Problem 2:
root@(none):/ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: /dev/md4 has been started with 5 drives.
root@(none):/ # mkdir /mountpoint
root@(none):/ # mount -o ro /dev/md4 /mountpoint
mount: mounting /dev/md4 on /mountpoint failed: Input/output error
The corresponding dmesg output is:
[ 182.846541] md: md4 stopped.
[ 182.853159] md: bind<sdb2>
[ 182.856296] md: bind<sdc2>
[ 182.859384] md: bind<sdd2>
[ 182.862542] md: bind<sde2>
[ 182.865627] md: bind<sda2>
[ 182.869350] md/raid:md4: device sda2 operational as raid disk 0
[ 182.875357] md/raid:md4: device sde2 operational as raid disk 4
[ 182.881319] md/raid:md4: device sdd2 operational as raid disk 3
[ 182.887233] md/raid:md4: device sdc2 operational as raid disk 2
[ 182.893256] md/raid:md4: device sdb2 operational as raid disk 1
[ 182.900534] md/raid:md4: allocated 5282kB
[ 182.904618] md/raid:md4: raid level 6 active with 5 out of 5 devices, algorithm 2
[ 182.912127] RAID conf printout:
[ 182.912137] --- level:6 rd:5 wd:5
[ 182.912146] disk 0, o:1, dev:sda2
[ 182.912154] disk 1, o:1, dev:sdb2
[ 182.912162] disk 2, o:1, dev:sdc2
[ 182.912170] disk 3, o:1, dev:sdd2
[ 182.912178] disk 4, o:1, dev:sde2
[ 182.912322] md4: detected capacity change from 0 to 5995018321920
[ 217.268855] md4: unknown partition table
[ 217.286822] grow_buffers: requested out-of-range block 18446744072533669887 for device md4
[ 217.295128] UDF-fs: error (device md4): udf_read_tagged: read failed, block=3119085567, location=-1175881729
[ 217.304967] grow_buffers: requested out-of-range block 18446744072533669631 for device md4
[ 217.313241] UDF-fs: error (device md4): udf_read_tagged: read failed, block=3119085311, location=-1175881985
[ 217.323062] grow_buffers: requested out-of-range block 18446744072533669886 for device md4
......more of the same
[ 218.127478] grow_buffers: requested out-of-range block 18446744072341838951 for device md4
[ 218.135736] UDF-fs: error (device md4): udf_read_tagged: read failed, block=2927254631, location=-1367712665
[ 218.225637] UDF-fs: warning (device md4): udf_fill_super: No partition found (1)
[ 218.245752] XFS (md4): Mounting Filesystem
[ 218.617470] XFS (md4): Starting recovery (logdev: internal)
[ 218.672816] XFS (md4): xlog_recover_process_data: bad clientid 0x0
[ 218.679024] XFS (md4): log mount/recovery failed: error 5
[ 218.684634] XFS (md4): log mount failed
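Back to Problem 1 for a moment: these are the checks I was planning to run to compare /dev/sdd with the other disks (assuming fdisk and dd are present on this build; the /proc and /sys entries should be there regardless):
cat /proc/partitions                        # is sdd itself listed, or only its partitions?
cat /sys/block/sdd/size                     # raw size in 512-byte sectors, for comparison with sda
fdisk -l /dev/sdd                           # can the disk's partition table be read at all?
dd if=/dev/sdd of=/dev/null bs=1M count=1   # quick read test of the raw device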
The disks have passed a surface scan, so they should all be readable, and I admit to being puzzled as to how a RAID 6 could lose its partition table when mdadm shows every disk as clean (only the first disk's entry shown below; the rest are the same):
mdadm --examine /dev/sd[abcde]2
/dev/sda2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 42fe1f17:a5bc4bc3:a730ad4c:3ccd2e22
Name : Lacie-1:4
Creation Time : Wed Apr 2 17:44:08 2014
Raid Level : raid6
Raid Devices : 5
Avail Dev Size : 3903007536 (1861.10 GiB 1998.34 GB)
Array Size : 5854510080 (5583.30 GiB 5995.02 GB)
Used Dev Size : 3903006720 (1861.10 GiB 1998.34 GB)
Super Offset : 3903007792 sectors
State : clean
Device UUID : d7c5e7a3:e09712d1:0c69fcd6:4b9a9393
Update Time : Sun Feb 1 11:15:55 2015
Checksum : d6ee52b1 - correct
Events : 2326373
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing)
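As a sanity check, the geometry at least adds up: RAID 6 over 5 disks leaves 3 disks' worth of data, and 3 × 1998.34 GB = 5995.02 GB, which matches both the Array Size above and the 5995018321920 bytes in the dmesg capacity line. I also plan to confirm the running array looks sane with the standard status commands:
cat /proc/mdstat
mdadm --detail /dev/md4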
I intend to fire up the Raspberry Pi version of FVDW, as LaCie's own software can diplomatically be described as "fragile", but first I would like to try to get my data back and to understand what happened in there! Thanks to a data-loss episode three years ago, which ended with me arriving at LaCie Paris and having a heated discussion with one of the team members there, I keep no irreplaceable data on the LaCie.
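In the meantime, from reading around I gather the "unknown partition table" line may be harmless (just the kernel probing md4 for a partition table it never had, before the mount probe works through filesystem types, hence the UDF noise), and that the real failure is the XFS log replay. So unless someone warns me off, my next step was going to be a read-only mount that skips log replay, followed by a no-modify repair check. This assumes the norecovery mount option and xfs_repair from xfsprogs are available on this build; I understand xfs_repair -L (zeroing the log) is a last resort, since it throws away whatever the log still holds:
mount -t xfs -o ro,norecovery /dev/md4 /mountpoint   # skip log replay, read-only
xfs_repair -n /dev/md4                               # report problems only, change nothing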
Any suggestions as to the best way forward?
Thanks
Peter