Problems with raid5

Re: Problems with raid5

Postby fvdw » Mon Aug 17, 2015 4:58 pm

OK, I see, but what info did e2fsck give for sda8, sdb8, ... and md0?

Re: Problems with raid5

Postby samrise » Mon Aug 17, 2015 5:26 pm

Code: Select all
root@nas:/ # e2fsck -f -v /dev/sda8
e2fsck 1.42.11 (09-Jul-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

          11 inodes used (0.00%, out of 60850176)
           0 non-contiguous files (0.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
     3870274 blocks used (1.59%, out of 243370693)
           0 bad blocks
           1 large file

           0 regular files
           2 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
           2 files
root@nas:/ # e2fsck -f -v /dev/sdb8
e2fsck 1.42.11 (09-Jul-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

          11 inodes used (0.00%, out of 60850176)
           0 non-contiguous files (0.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
     3870274 blocks used (1.59%, out of 243368685)
           0 bad blocks
           1 large file

           0 regular files
           2 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
           2 files
root@nas:/ # e2fsck -f -v /dev/sdc8
e2fsck 1.42.11 (09-Jul-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

          11 inodes used (0.00%, out of 60850176)
           0 non-contiguous files (0.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
     3870274 blocks used (1.59%, out of 243368685)
           0 bad blocks
           1 large file

           0 regular files
           2 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
           2 files
root@nas:/ # e2fsck -f -v /dev/sdd8
e2fsck 1.42.11 (09-Jul-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

          11 inodes used (0.00%, out of 60850176)
           0 non-contiguous files (0.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
     3870274 blocks used (1.59%, out of 243368685)
           0 bad blocks
           1 large file

           0 regular files
           2 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
           2 files
root@nas:/ # e2fsck -f -v /dev/sde8
e2fsck 1.42.11 (09-Jul-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

          11 inodes used (0.00%, out of 60850176)
           0 non-contiguous files (0.0%)
           0 non-contiguous directories (0.0%)
             # of inodes with ind/dind/tind blocks: 0/0/0
     3870274 blocks used (1.59%, out of 243368685)
           0 bad blocks
           1 large file

           0 regular files
           2 directories
           0 character device files
           0 block device files
           0 fifos
           0 links
           0 symbolic links (0 fast symbolic links)
           0 sockets
------------
           2 files


I've been doing some research and found this:

Warning: Using --assume-clean when creating a level 4, 5 or 6 RAID will almost certainly lead to massive corruption. Levels 4 and 5 should work with drives containing only zeros. Level 6 depends on implementation details.


If an array is still syncing, you may still proceed to creating filesystems, because the sync operation is completely transparent to the file system. Please note that the sync will need more time this way, and if a drive happens to fail before the RAID sync finishes, then you're in trouble. It's generally recommended to wait until the sync is completed (or to skip the sync by passing --assume-clean during array creation if the disks are empty).


So I'm going to drop --assume-clean, let it resync, and then work on md0 (e2fsck, mke2fs, etc.).
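
For reference, recreating the array without the flag would look something like this (only a sketch: the five partitions, RAID level and device count are taken from this thread, everything else is left at mdadm defaults):

Code: Select all
# stop the half-built array, then recreate it without --assume-clean
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda8 /dev/sdb8 /dev/sdc8 /dev/sdd8 /dev/sde8
# watch the resync progress
cat /proc/mdstat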

Re: Problems with raid5

Postby samrise » Mon Aug 17, 2015 6:08 pm

I just noticed something interesting while looking at /proc/mdstat.

This is the example from the tutorial:

root@Acrab:/ # watch -n 60 'cat /proc/mdstat'
Every 60s: cat /proc/mdstat 2014-12-01 11:36:44
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sdd8[2] sdc8[1] sdb8[0]
2927364412 blocks super 1.0 [3/2] [_UU]
[>.................] resync = 1.0% (29973696/2927364412) finish=399.7min speed=120785K/sec
unused devices: <none>


And this is mine:

root@nas:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sde8[4] sdd8[3] sdc8[2] sdb8[1] sda8[0]
3893898496 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
[>....................] resync = 2.6% (25650848/973474624) finish=7430.2min speed=2125K/sec

unused devices: <none>


:hammerhead :dontknow
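
If the resync stays that slow, the md speed limits are worth a look; these sysctls exist in stock kernels, though the value below is only illustrative:

Code: Select all
# current floor/ceiling for resync speed, in KB/s
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# raise the floor (illustrative value)
echo 50000 > /proc/sys/dev/raid/speed_limit_min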

Edit: sync just crashed.

dmesg

Code: Select all
Unable to handle kernel NULL pointer dereference at virtual address 0000000b
pgd = c4a20000
[0000000b] *pgd=04a49831, *pte=00000000, *ppte=00000000
Internal error: Oops: 1 [#1] ARM
Modules linked in: raid_class dm_raid dm_log_userspace dm_mirror dm_region_hash dm_log dm_snapshot dm_service_time dm_queue_length dm_round_robin dm_multipath dm_crypt dm_mod raid456 async_raid6_recov async_memcpy async_pq async_xor xor async_tx raid6_pq raid10 raid1 raid0 linear md_mod uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_core v4l2_common v4l2_int_device videodev iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi iscsi_trgt(O) nfsd exportfs nfs lockd sunrpc usblp fuse ntfs isofs cifs
CPU: 0    Tainted: G           O  (3.9.5 #25)
PC is at async_trigger_callback+0x1c/0x8c [async_tx]
LR is at raid_run_ops+0xcbc/0xd8c [raid456]
pc : [<bf23a2cc>]    lr : [<bf25b150>]    psr: a0000013
sp : c5cc1d08  ip : c5cc1d28  fp : c5cc1d24
r10: 00000000  r9 : c6316098  r8 : c04cd6dc
r7 : c5cc1ed8  r6 : 00000005  r5 : c0467d60  r4 : c5cc1d6c
r3 : ffffffff  r2 : 00000002  r1 : 00000000  r0 : c5cc1d6c
Flags: NzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
Control: 0005317f  Table: 04a20000  DAC: 00000017
Process md0_raid5 (pid: 21581, stack limit = 0xc5cc01b8)
Stack: (0xc5cc1d08 to 0xc5cc2000)
1d00:                   00000000 ffffffff c0467d60 00000005 c5cc1dac c5cc1d28
1d20: bf25b150 bf23a2c0 00001000 c63160e0 c5cc1d6c 00000005 c5cc1e04 bf254970
1d40: 00000000 ffa91170 00000005 00000005 ffffffff c7041af0 00000000 c037e090
1d60: 00000000 00000020 030ece80 00000004 ffffffff bf255c04 c6316098 00000000
1d80: 00000000 c6316098 c7391c00 00000005 c5cc1ed8 c5a73c00 c5cc1e04 00000000
1da0: c5cc1e8c c5cc1db0 bf25d7a0 bf25a4a4 00208040 c7820040 20000093 c7391d04
1dc0: c5cc1de4 c5cc1dd0 c003aefc c003ad60 c22bbd74 00000001 ffffffff c5cc1de8
1de0: ffffffff 00000000 c5cc1e1c c5cc1df8 c0038198 c0008cd8 ffffffff c22ba01c
1e00: 00000000 00000001 00000000 00000000 00000000 00000000 00000004 00000000
1e20: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 ffffffff
1e40: ffffffff 00000000 00000001 00000000 00000020 00000000 00000000 00000000
1e60: 00000000 00000008 00000001 c7391c00 c5cc1ed8 c5a73c00 00000000 00000000
1e80: c5cc1f24 c5cc1e90 bf25e540 bf25b760 c5cc1ecc c5cc1ea0 c028f364 c003a940
1ea0: 00000000 7fffffff 7fffffff c5cc1f38 c73e2648 c5cc0000 00000000 00000000
1ec0: c5cc1edc 91827364 c5cc1ec8 c5cc1ec8 c5cc1ed0 c5cc1ed0 c6316098 c6c73af8
1ee0: c6e86e18 c6e869b8 c6317218 c6237b78 c290c098 c37c2518 00000000 c73e2640
1f00: 7fffffff c5cc1f38 c73e2648 c5cc0000 00000000 00000000 c5cc1f64 c5cc1f28
1f20: bf1e3660 bf25e154 00000000 00000000 c7041ac0 c0034aac c5cc1f38 c5cc1f38
1f40: 00000000 c295bc3c 00000000 c73e2640 bf1e3518 00000000 c5cc1fac c5cc1f68
1f60: c0034164 bf1e3528 00000000 00000000 00000000 c73e2640 00000000 c5cc1f7c
1f80: c5cc1f7c 00000000 c5cc1f88 c5cc1f88 c295bc3c c00340b0 00000000 00000000
1fa0: 00000000 c5cc1fb0 c000d9d0 c00340c0 00000000 00000000 00000000 00000000
1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
1fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
Backtrace:
[<bf23a2b0>] (async_trigger_callback+0x0/0x8c [async_tx]) from [<bf25b150>] (raid_run_ops+0xcbc/0xd8c [raid456])
 r6:00000005 r5:c0467d60 r4:ffffffff r3:00000000
[<bf25a494>] (raid_run_ops+0x0/0xd8c [raid456]) from [<bf25d7a0>] (handle_stripe+0x2050/0x29f4 [raid456])
[<bf25b750>] (handle_stripe+0x0/0x29f4 [raid456]) from [<bf25e540>] (raid5d+0x3fc/0x5b4 [raid456])
[<bf25e144>] (raid5d+0x0/0x5b4 [raid456]) from [<bf1e3660>] (md_thread+0x148/0x15c [md_mod])
[<bf1e3518>] (md_thread+0x0/0x15c [md_mod]) from [<c0034164>] (kthread+0xb4/0xc0)
 r8:00000000 r7:bf1e3518 r6:c73e2640 r5:00000000 r4:c295bc3c
[<c00340b0>] (kthread+0x0/0xc0) from [<c000d9d0>] (ret_from_fork+0x14/0x24)
 r7:00000000 r6:00000000 r5:c00340b0 r4:c295bc3c
Code: e5903004 e1a04000 e3530000 0a000006 (e593600c)
---[ end trace 29cc59a57d2257dd ]---

Re: Problems with raid5

Postby fvdw » Mon Aug 17, 2015 6:28 pm

:pissed

Filling a partition with zeros is possible using /dev/zero and the dd command. On a 928 GB partition this can take a while.
Code: Select all
dd if=/dev/zero of=/dev/sda8 bs=1M
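
To zero all five data partitions the same way, a small loop works (a sketch assuming the sda8..sde8 layout from this thread; this destroys everything on those partitions, so double-check the device names first):

Code: Select all
# dd exits with "No space left on device" at the end of each partition; that is expected
for dev in /dev/sd[a-e]8; do
    dd if=/dev/zero of=$dev bs=1M
done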


I think it is time we try the 3.14.2 kernel and modules. This evening I will prepare a package.

Re: Problems with raid5

Postby fvdw » Mon Aug 17, 2015 7:52 pm

Sorry to keep you waiting, but I have visitors, so it will be later this evening.

Re: Problems with raid5

Postby samrise » Mon Aug 17, 2015 7:53 pm

It's ok, take your time. No rush here. Btw, thank you very much for your help! Same for Jocko!

Re: Problems with raid5

Postby Jocko » Mon Aug 17, 2015 7:56 pm

samrise wrote: I just noticed something interesting while looking at /proc/mdstat.

And this is mine:
Code: Select all
root@nas:/ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid5 sde8[4] sdd8[3] sdc8[2] sdb8[1] sda8[0]
3893898496 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
[>....................] resync = 2.6% (25650848/973474624) finish=7430.2min speed=2125K/sec

unused devices: <none>


There is an issue: the raid should synchronize 3893898496 blocks instead of 973474624, which is the size of each sdx8.

So something is wrong with the kernel and the raid modules.

This can explain why the format failed.


So wait and see whether this issue is fixed with the 3.14.2 kernel.
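
To double-check where those two numbers come from, the array size and the per-partition sizes can be compared directly (a sketch using standard tools):

Code: Select all
# "Array Size" and "Used Dev Size" as md sees them
mdadm --detail /dev/md0 | grep -i size
# raw block counts of the members and the array
grep -E 'sd[a-e]8|md0' /proc/partitions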

Re: Problems with raid5

Postby fvdw » Mon Aug 17, 2015 8:33 pm

OK, the new kernel and modules are inside the zip archive.
Unpack it and you will find two files:

UIMAGE-3142-5BIG1-1
This is the kernel.

5big1-3142-modules.tar
This contains the kernel modules inside the directory tree /lib/modules/3.14.2.

To install the modules:
Copy the tar archive to the system root folder (/) using WinSCP

Connect with putty and unpack the archive
Code: Select all
tar -xvf 5big1-3142-modules.tar

Now you should have a folder /lib/modules/3.14.2 with all the modules inside.
You can use the same modprobe commands; when the new 3.14.2 kernel is running, it will automatically select the 3.14.2 subfolder to load the modules.
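
For example (raid456 is one of the module names visible in the oops above; md_mod is pulled in automatically as a dependency):

Code: Select all
modprobe raid456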

To install the kernel:
You have two options
(a) Load it using the fvdw-sl console, using the action "load external kernel". For this you need to place the kernel inside the tftp subfolder of the fvdw-sl console that is running on your PC.
The disadvantage of this method is that you need to load the kernel again the same way each time you reboot the NAS.
The advantage is that you can easily revert to the 3.9.5 kernel by rebooting the NAS.
(b) Write it to sda6.
To do this, copy it to the system root folder (/) using WinSCP, then connect with putty and write it to sda6:
Code: Select all
dd if=UIMAGE-3142-5BIG1-1 of=/dev/sda6

reboot the NAS
The advantage of this method is that it will automatically load the new kernel at every boot.
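
Before rebooting, the write can be verified by reading back exactly as many bytes as the image and comparing checksums (a sketch; it assumes stat, head and md5sum are available on the NAS):

Code: Select all
# both sums should match if the image was written intact
SIZE=$(stat -c %s /UIMAGE-3142-5BIG1-1)
head -c $SIZE /dev/sda6 | md5sum
md5sum /UIMAGE-3142-5BIG1-1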

The kernel has been tested by another user, so it should run fine and you could consider method (b).

Re: Problems with raid5

Postby samrise » Mon Aug 17, 2015 8:55 pm

I did option (b), but uname -a is still the same.

Code: Select all
root@nas:/ # dd if=UIMAGE-3142-5BIG1-1 of=/dev/sda6
3996+1 records in
3996+1 records out
root@nas:/ # reboot


Code: Select all
login as: root
root@nas's password:
root@nas:/ # uname -a
Linux nas.local 3.9.5 #25 Thu Feb 27 22:22:32 GMT+1 2014 armv5tel unknown unknown GNU/Linux
root@nas:/ #


Edit:

Code: Select all
root@nas:/ # mount /dev/sda5 test/
root@nas:/ # cd test/
root@nas:/test # ls
etc  installer.log  lost+found  nas_conf_db_ok.xml  tmp  usr  var
root@nas:/test # nano installer.log
root@nas:/test # ls -la
total 56
drwxrwxrwx   8 nobody nobody  4096 2015-08-17 22:57 .
drwxr-xr-x  25 root   root    4096 2015-08-17 22:58 ..
drwxrwxrwx   4 root   root    4096 2015-08-17 22:58 etc
-rw-r--r--   1 root   root     134 1970-01-01 01:27 installer.log
drwx------   2 root   root   16384 2000-01-01 20:40 lost+found
-rw-r--r--   1 nobody nobody  6712 2015-06-26 19:08 nas_conf_db_ok.xml
drwx------   2 root   root    4096 2015-06-26 19:03 .ssh
drwxrwxrwx   6 root   root    4096 2015-08-17 22:58 tmp
drwxr-xr-x   3 root   root    4096 1970-01-01 01:27 usr
drwxr-xr-x   3 root   root    4096 1970-01-01 01:27 var
root@nas:/test #


The content of /dev/sda6 is new, so the dd worked. Why is it not booting with the new kernel?

I'm about to lose my mind. :hairpull

Re: Problems with raid5

Postby fvdw » Mon Aug 17, 2015 9:27 pm

Did you in the past install anything on sdb6, sdc6, sdd6 or sde6?

This LaCie bootloader is funny and looks in multiple places for the kernel.

You could use method (a), or also write the kernel to /dev/sde6.
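
If all five disks share the same partition layout, the image could be written to every partition 6 in one go (a sketch; verify the layout with cat /proc/partitions before running it):

Code: Select all
for dev in /dev/sd[a-e]6; do
    dd if=/UIMAGE-3142-5BIG1-1 of=$dev
done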
