Lacie 5big Network 2 - Raid Array Missing

Postby pfwaaa » Sun Feb 01, 2015 9:08 pm

Looking for a *little* ;) help please with Lacie 5big2:
CPU: Feroceon 88FR131 [56251311] revision 1 (ARMv5TE), cr=00053177
CPU: VIVT data cache, VIVT instruction cache
Machine: LaCie 5Big Network v2
Ignoring unrecognised tag 0x41000403

History:
1) 5x 2 TB disks in RAID 6.
2) Asked the Lacie to delete a 1.5 TB Time Machine file (OK, I know, a mistake).
3) After that, no more shares, and the Lacie declares the NAS is part full :(
4) Discovered plugout.net & fvdw :woohoo and am now running UIMAGE-395-NWSP2CL-179-standalone
5) Problem 1: All disks are visible as SCSI devices, but /dev/sdd is apparently not accessible, although /dev/sdd2 will join the RAID array. I can see no difference in status as the disks are identified during boot. Not sure if this is a contributor to...
6) Problem 2:
root@(none):/ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: /dev/md4 has been started with 5 drives.
root@(none):/ # mkdir /mountpoint
root@(none):/ # mount -o ro /dev/md4 /mountpoint
mount: mounting /dev/md4 on /mountpoint failed: Input/output error
The corresponding dmesg output is:
[ 182.846541] md: md4 stopped.
[ 182.853159] md: bind<sdb2>
[ 182.856296] md: bind<sdc2>
[ 182.859384] md: bind<sdd2>
[ 182.862542] md: bind<sde2>
[ 182.865627] md: bind<sda2>
[ 182.869350] md/raid:md4: device sda2 operational as raid disk 0
[ 182.875357] md/raid:md4: device sde2 operational as raid disk 4
[ 182.881319] md/raid:md4: device sdd2 operational as raid disk 3
[ 182.887233] md/raid:md4: device sdc2 operational as raid disk 2
[ 182.893256] md/raid:md4: device sdb2 operational as raid disk 1
[ 182.900534] md/raid:md4: allocated 5282kB
[ 182.904618] md/raid:md4: raid level 6 active with 5 out of 5 devices, algorithm 2
[ 182.912127] RAID conf printout:
[ 182.912137] --- level:6 rd:5 wd:5
[ 182.912146] disk 0, o:1, dev:sda2
[ 182.912154] disk 1, o:1, dev:sdb2
[ 182.912162] disk 2, o:1, dev:sdc2
[ 182.912170] disk 3, o:1, dev:sdd2
[ 182.912178] disk 4, o:1, dev:sde2
[ 182.912322] md4: detected capacity change from 0 to 5995018321920
[ 217.268855] md4: unknown partition table
[ 217.286822] grow_buffers: requested out-of-range block 18446744072533669887 for device md4
[ 217.295128] UDF-fs: error (device md4): udf_read_tagged: read failed, block=3119085567, location=-1175881729
[ 217.304967] grow_buffers: requested out-of-range block 18446744072533669631 for device md4
[ 217.313241] UDF-fs: error (device md4): udf_read_tagged: read failed, block=3119085311, location=-1175881985
[ 217.323062] grow_buffers: requested out-of-range block 18446744072533669886 for device md4
......more of the same
[ 218.127478] grow_buffers: requested out-of-range block 18446744072341838951 for device md4
[ 218.135736] UDF-fs: error (device md4): udf_read_tagged: read failed, block=2927254631, location=-1367712665
[ 218.225637] UDF-fs: warning (device md4): udf_fill_super: No partition found (1)
[ 218.245752] XFS (md4): Mounting Filesystem
[ 218.617470] XFS (md4): Starting recovery (logdev: internal)
[ 218.672816] XFS (md4): xlog_recover_process_data: bad clientid 0x0
[ 218.679024] XFS (md4): log mount/recovery failed: error 5
[ 218.684634] XFS (md4): log mount failed

The disks have passed a surface scan, so they should all be readable, and I admit to being puzzled as to how a RAID 6 could lose its partition table when mdadm shows all disks as clean (just showing the first entry, but the rest are clean too):
mdadm --examine /dev/sd[abcde]2
/dev/sda2:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 42fe1f17:a5bc4bc3:a730ad4c:3ccd2e22
Name : Lacie-1:4
Creation Time : Wed Apr 2 17:44:08 2014
Raid Level : raid6
Raid Devices : 5

Avail Dev Size : 3903007536 (1861.10 GiB 1998.34 GB)
Array Size : 5854510080 (5583.30 GiB 5995.02 GB)
Used Dev Size : 3903006720 (1861.10 GiB 1998.34 GB)
Super Offset : 3903007792 sectors
State : clean
Device UUID : d7c5e7a3:e09712d1:0c69fcd6:4b9a9393

Update Time : Sun Feb 1 11:15:55 2015
Checksum : d6ee52b1 - correct
Events : 2326373

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAAAA ('A' == active, '.' == missing)

I intend to fire up the Raspberry Pi version of FVDW, as the LaCie software can diplomatically be described as "fragile", but I would like to try to get my data back and understand what happened in there! Thanks to a data-loss episode 3 years ago, which saw me arriving at LaCie Paris for a heated discussion with one of the team members there, I have no irreplaceable data on the LaCie.

Any suggestions on the best way forward?

Thanks

Peter

Re: Lacie 5big Network 2 - Raid Array Missing

Postby fvdw » Sun Feb 01, 2015 9:49 pm

Jocko is the raid expert around here, but he is not around this evening.

Maybe a suggestion to gather some more information: what does this command give as output?
Code: Select all
mdadm --query --detail /dev/md4

It should also list the partition table, as far as I know.

Further, you could check all disks to see whether they contain a partition table and whether the sdX2 partitions are of type raid:
Code: Select all
fdisk -l


What file system is used on the raid array? From the log it looks like XFS. If it is not ext3 or xfs then the standalone kernel may fail to mount it because it doesn't have support for that file system.

From what I read on the internet, the message "md4: unknown partition table" seems to be normal. The real partition tables are on the individual disks.

I think it is more likely that you have a damaged XFS file system.
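If you want to double-check which file system is on the array, a minimal check (just a sketch, assuming dd and hexdump are included in this busybox build) is to dump the first sector of the assembled array and look at the magic bytes:
Code: Select all
dd if=/dev/md4 bs=512 count=1 2>/dev/null | hexdump -C | head -n 4

An XFS file system starts with the ASCII string "XFSB" right at offset 0; ext3 keeps its superblock at offset 1024 instead, so you would need to skip the first two 512-byte sectors to see its magic.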

Re: Lacie 5big Network 2 - Raid Array Missing

Postby pfwaaa » Mon Feb 02, 2015 7:05 am

fvdw,
Thanks for the reply. Not sure what file system the standard Lacie software uses. The standalone kernel suggests xfs.

root@(none):/ # mdadm --assemble /dev/md4 /dev/sd[abcde]2
mdadm: /dev/md4 has been started with 5 drives.
root@(none):/ # mdadm --query --detail /dev/md4
/dev/md4:
Version : 1.0
Creation Time : Wed Apr 2 17:44:08 2014
Raid Level : raid6
Array Size : 5854510080 (5583.30 GiB 5995.02 GB)
Used Dev Size : 1951503360 (1861.10 GiB 1998.34 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Sun Feb 1 11:15:55 2015
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : Lacie-1:4
UUID : 42fe1f17:a5bc4bc3:a730ad4c:3ccd2e22
Events : 2326373

Number Major Minor RaidDevice State
0 8 34 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 2 2 active sync /dev/sdc2
3 8 66 3 active sync /dev/sdd2
4 8 50 4 active sync /dev/sde2

fdisk -l shows /dev/sd[abce]2 as: 251 243201 1951503907+ 83 Linux

which is a standard Linux partition (type 83). As I mentioned, for some reason /dev/sdd isn't listed, though its SCSI device is (!?).
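For what it's worth, a cross-check I could run (assuming /proc is available in this standalone environment) is to compare fdisk's view with the kernel's own partition list:
Code: Select all
cat /proc/partitions
fdisk -l /dev/sdd

If sdd and sdd2 both show up in /proc/partitions, the kernel is at least seeing the disk, and the difference is only in how fdisk reads its partition table.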

The log shows:

[ 218.617470] XFS (md4): Starting recovery (logdev: internal)
[ 218.672816] XFS (md4): xlog_recover_process_data: bad clientid 0x0
[ 218.679024] XFS (md4): log mount/recovery failed: error 5

so it seems to be trying to recover then gives up :-(
Peter

Re: Lacie 5big Network 2 - Raid Array Missing

Postby Jocko » Mon Feb 02, 2015 9:25 am

Hi pfwaaa,

I think that your raid is clean
Code: Select all
[ 182.846541] md: md4 stopped.
[ 182.853159] md: bind<sdb2>
[ 182.856296] md: bind<sdc2>
[ 182.859384] md: bind<sdd2>
[ 182.862542] md: bind<sde2>
[ 182.865627] md: bind<sda2>
[ 182.869350] md/raid:md4: device sda2 operational as raid disk 0
[ 182.875357] md/raid:md4: device sde2 operational as raid disk 4
[ 182.881319] md/raid:md4: device sdd2 operational as raid disk 3
[ 182.887233] md/raid:md4: device sdc2 operational as raid disk 2
[ 182.893256] md/raid:md4: device sdb2 operational as raid disk 1
[ 182.900534] md/raid:md4: allocated 5282kB
[ 182.904618] md/raid:md4: raid level 6 active with 5 out of 5 devices, algorithm 2
[ 182.912127] RAID conf printout:
[ 182.912137] --- level:6 rd:5 wd:5
[ 182.912146] disk 0, o:1, dev:sda2
[ 182.912154] disk 1, o:1, dev:sdb2
[ 182.912162] disk 2, o:1, dev:sdc2
[ 182.912170] disk 3, o:1, dev:sdd2
[ 182.912178] disk 4, o:1, dev:sde2
[ 182.912322] md4: detected capacity change from 0 to 5995018321920
but you have an issue with its filesystem
Code: Select all
[ 217.268855] md4: unknown partition table

That is why you cannot mount it
Code: Select all
root@(none):/ # mount -o ro /dev/md4 /mountpoint
mount: mounting /dev/md4 on /mountpoint failed: Input/output error


We need to repair the filesystem on the raid. So could you post the output of
Code: Select all
file -s /dev/md4
after having assembled it.

Re: Lacie 5big Network 2 - Raid Array Missing

Postby pfwaaa » Mon Feb 02, 2015 6:52 pm

Jocko,
I don't think the file command is built into busybox. Is there a standalone somewhere, or am I doing something wrong?

Thanks

Peter

Re: Lacie 5big Network 2 - Raid Array Missing

Postby Jocko » Mon Feb 02, 2015 7:17 pm

You are right :pound

But if I remember correctly, and the fs is xfs, I believe that fvdw compiled xfs_repair a few months ago.

I will go and search for it.

Re: Lacie 5big Network 2 - Raid Array Missing

Postby Jocko » Mon Feb 02, 2015 7:29 pm

Found it! Here you can download a static xfs_repair binary: viewtopic.php?f=26&t=2003&hilit=xfs_repair

So unzip the attached file into the same folder on your PC where you put the standalone kernel files, and fetch it from the NAS with the tftp command
Code: Select all
tftp -l xfs_repair -r xfs_repair -g pc-ip
(pc-ip is the IP address of your laptop); the command fetches the binary into the current directory.

Set execute permissions:
Code: Select all
chmod 755 xfs_repair


So do
Code: Select all
 xfs_repair /dev/md4
(of course you need to re-assemble the raid device before running this command line)
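If you want to be cautious, you could first do a dry run with the no-modify flag, which only checks and reports what it would change without writing anything (just a suggestion):
Code: Select all
xfs_repair -n /dev/md4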

Re: Lacie 5big Network 2 - Raid Array Missing

Postby fvdw » Mon Feb 02, 2015 8:20 pm

Read the linked post carefully: to use xfs_repair you also need to install the mini-glibc package!

Re: Lacie 5big Network 2 - Raid Array Missing

Postby pfwaaa » Mon Feb 02, 2015 8:46 pm

Not sure that went well :(
root@(none):/ # cd /sbin
root@(none):/sbin # xfs_repair /dev/md4
Phase 1 - find and verify superblock...
Killed

Dmesg output:
[ 533.101073] md: md4 stopped.
[ 533.107707] md: bind<sdb2>
[ 533.110817] md: bind<sdc2>
[ 533.113972] md: bind<sdd2>
[ 533.117070] md: bind<sde2>
[ 533.120161] md: bind<sda2>
[ 533.123970] md/raid:md4: device sda2 operational as raid disk 0
[ 533.129893] md/raid:md4: device sde2 operational as raid disk 4
[ 533.135884] md/raid:md4: device sdd2 operational as raid disk 3
[ 533.141801] md/raid:md4: device sdc2 operational as raid disk 2
[ 533.147748] md/raid:md4: device sdb2 operational as raid disk 1
[ 533.155011] md/raid:md4: allocated 5282kB
[ 533.159178] md/raid:md4: raid level 6 active with 5 out of 5 devices, algorithm 2
[ 533.166698] RAID conf printout:
[ 533.166709] --- level:6 rd:5 wd:5
[ 533.166718] disk 0, o:1, dev:sda2
[ 533.166726] disk 1, o:1, dev:sdb2
[ 533.166734] disk 2, o:1, dev:sdc2
[ 533.166742] disk 3, o:1, dev:sdd2
[ 533.166750] disk 4, o:1, dev:sde2
[ 533.166894] md4: detected capacity change from 0 to 5995018321920
[ 1428.703074] md4: unknown partition table
[ 2115.333865] xfs_repair invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=0
[ 2115.341730] [<c001852c>] (unwind_backtrace+0x0/0xe4) from [<c00670d0>] (dump_header.clone.16+0x6c/0x194)
[ 2115.352285] [<c00670d0>] (dump_header.clone.16+0x6c/0x194) from [<c0067424>] (oom_kill_process+0xa4/0x3e0)
[ 2115.361930] [<c0067424>] (oom_kill_process+0xa4/0x3e0) from [<c0067bd4>] (out_of_memory+0x288/0x2dc)
[ 2115.371090] [<c0067bd4>] (out_of_memory+0x288/0x2dc) from [<c006afc8>] (__alloc_pages_nodemask+0x518/0x5f4)
[ 2115.381248] [<c006afc8>] (__alloc_pages_nodemask+0x518/0x5f4) from [<c007f24c>] (handle_pte_fault+0x12c/0x6c4)
[ 2115.391265] [<c007f24c>] (handle_pte_fault+0x12c/0x6c4) from [<c007f890>] (handle_mm_fault+0xac/0xbc)
[ 2115.400491] [<c007f890>] (handle_mm_fault+0xac/0xbc) from [<c05a0114>] (do_page_fault+0x16c/0x2bc)
[ 2115.409456] [<c05a0114>] (do_page_fault+0x16c/0x2bc) from [<c0008358>] (do_DataAbort+0x30/0x98)
[ 2115.418168] [<c0008358>] (do_DataAbort+0x30/0x98) from [<c059ec5c>] (__dabt_usr+0x3c/0x40)
[ 2115.426432] Exception stack(0xde11bfb0 to 0xde11bff8)
[ 2115.431480] bfa0: 99517008 00000000 00160000 997b7000
[ 2115.439669] bfc0: 000ac758 00000076 00000af8 00000000 00000000 00400000 000000af 000aaa84
[ 2115.447844] bfe0: 00000000 bebe4a28 00027b98 b6e08458 20000010 ffffffff
[ 2115.454465] Mem-info:
[ 2115.456742] Normal per-cpu:
[ 2115.459537] CPU 0: hi: 186, btch: 31 usd: 114
[ 2115.464359] active_anon:120680 inactive_anon:0 isolated_anon:0
[ 2115.464359] active_file:0 inactive_file:0 isolated_file:0
[ 2115.464359] unevictable:2795 dirty:0 writeback:0 unstable:0
[ 2115.464359] free:713 slab_reclaimable:134 slab_unreclaimable:692
[ 2115.464359] mapped:235 shmem:0 pagetables:248 bounce:0
[ 2115.464359] free_cma:0
[ 2115.494952] Normal free:2852kB min:2852kB low:3564kB high:4276kB active_anon:482720kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:11180kB isolated(anon):0kB isolated(file):0kB present:524288kB managed:509032kB mlocked:0kB dirty:0kB writeback:0kB mapped:940kB shmem:0kB slab_reclaimable:536kB slab_unreclaimable:2768kB kernel_stack:376kB pagetables:992kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[ 2115.535787] lowmem_reserve[]: 0 0
[ 2115.539155] Normal: 1*4kB (M) 2*8kB (M) 1*16kB (M) 0*32kB 0*64kB 0*128kB 1*256kB (M) 1*512kB (R) 0*1024kB 1*2048kB (R) 0*4096kB = 2852kB
[ 2115.551745] 2795 total pagecache pages
[ 2115.555510] 0 pages in swap cache
[ 2115.558829] Swap cache stats: add 0, delete 0, find 0/0
[ 2115.564061] Free swap = 0kB
[ 2115.566948] Total swap = 0kB
[ 2115.577893] 131072 pages of RAM
[ 2115.581057] 899 free pages
[ 2115.583810] 3154 reserved pages
[ 2115.586951] 826 slab pages
[ 2115.589661] 263367 pages shared
[ 2115.592819] 0 pages swap cached
[ 2115.595964] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[ 2115.603809] [ 707] 0 707 306 56 2 0 0 telnetd
[ 2115.611879] [ 711] 0 711 49 8 2 0 0 buttons-nwsp2
[ 2115.620491] [ 713] 0 713 307 82 2 0 0 sh
[ 2115.628153] [ 897] 0 897 121636 120805 240 0 0 xfs_repair
[ 2115.636504] Out of memory: Kill process 897 (xfs_repair) score 916 or sacrifice child
[ 2115.644334] Killed process 897 (xfs_repair) total-vm:486544kB, anon-rss:482612kB, file-rss:608kB
root@(none):/sbin #

Again, thanks for the help :-D

Peter

Re: Lacie 5big Network 2 - Raid Array Missing

Postby fvdw » Mon Feb 02, 2015 10:44 pm

Remember this kernel runs from RAM and there is no swap; it is clear from the output that the kernel runs out of memory when running xfs_repair. The question is why. The 5big2 has 512 MB of RAM, which should be sufficient unless this command loads chunks of data into memory bigger than that :scratch
When googling this error it seems to be quite common.

We cannot add swap memory because we have no disk for it.

It seems xfs_repair has an option "-m" to limit its memory usage (value in MB).
So you could use the command
Code: Select all
free

to see how much RAM is free, and then run xfs_repair with a suitable limit. Take into account that the xfs_repair binary itself also takes some space, as do the mini-glibc files, but those should be only a few MB.
I expect that at least 256 MB should be available.
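For example (just a sketch; the 400 below is only an illustration, pick a value comfortably below what free reports as available):
Code: Select all
free
xfs_repair -m 400 /dev/md4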
