Raid 5 failed on 5big1

Re: Raid 5 failed on 5big1

Postby totorweb » Fri Nov 01, 2013 5:09 am

Do I need an ext2.ko module to mount an ext2 filesystem?
I created sdd1 using gdisk and mkfs.ext2.

Code: Select all
root@(none):/ # mount /dev/sdd1 /sdd1/
mount: mounting /dev/sdd1 on /sdd1/ failed: Invalid argument
root@(none):/ # mount /dev/sdd1 /sdd1/ -t ext2
mount: mounting /dev/sdd1 on /sdd1/ failed: No such device


Code: Select all
root@(none):/ # gdisk /dev/sdd
GPT fdisk (gdisk) version 0.8.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdd: 2930277168 sectors, 1.4 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 00000000-0000-0000-0000-000000000000
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 2930277134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      2930277134   1.4 TiB     8300  Linux filesystem


Re: Raid 5 failed on 5big1

Postby Mijzelf » Fri Nov 01, 2013 9:02 am

You can see which filesystems are supported by
Code: Select all
cat /proc/filesystems
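
For instance, a quick test for one specific filesystem (a sketch; assumes the busybox grep in this firmware supports -w):
Code: Select all
grep -w ext2 /proc/filesystems || echo "this kernel has no ext2 support"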


Congratulations, BTW.

Re: Raid 5 failed on 5big1

Postby fvdw » Fri Nov 01, 2013 9:02 am

ext3 is supported.

You mounted a partition successfully here: viewtopic.php?f=30&t=1557&start=10#p11591

Or the formatting was not done properly. Where did you find the mkfs.ext2 that you used?

To my knowledge the command (applet) included in busybox is mke2fs.

That won't format partitions bigger than 2 TB; for that you need a 64-bit based mke2fs. Attached is a static binary able to handle partitions bigger than 2 TB.


mke2fs-64.zip
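
A rough way to check on which side of that limit a partition falls (a sketch; /proc/partitions reports sizes in 1 KiB blocks, so anything above 2147483648 blocks is past roughly the 2 TB mark):
Code: Select all
grep sdd1 /proc/partitions    # columns: major, minor, size in 1 KiB blocks, name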

---edit
I double-checked it: in my quest to reduce the kernel size below 2 MB I also disabled ext2 and left only ext3 in place; see the post below.

Re: Raid 5 failed on 5big1

Postby fvdw » Fri Nov 01, 2013 9:09 am

I double-checked the kernel config.
It seems I went quite far in stripping it to reduce size.
I have set this in the kernel config:
Code: Select all
# File systems
#
# CONFIG_EXT2_FS is not set
CONFIG_EXT3_FS=y
CONFIG_EXT3_DEFAULTS_TO_ORDERED=y
#


So indeed ext2 is not enabled. You need to make an ext3 filesystem, which is almost the same as ext2; the only difference is the journal.
So, for example, a command like this:

Code: Select all
mke2fs -j -m 1 /dev/sdd1
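
Put together, the whole sequence would look something like this (a sketch; the mountpoint /MNT_sdd1 is just an example name):
Code: Select all
mke2fs -j -m 1 /dev/sdd1            # -j adds a journal (ext3), -m 1 reserves 1% for root
mkdir -p /MNT_sdd1                  # the mountpoint must exist before mounting
mount -t ext3 /dev/sdd1 /MNT_sdd1
cat /proc/mounts                    # verify the kernel really mounted it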

Re: Raid 5 failed on 5big1

Postby totorweb » Fri Nov 01, 2013 11:40 am

sdd is 1.5 TB, so the 32-bit version will work.

As far as I remember I used mkfs.ext2; mke2fs doesn't ring a bell.

OK, so I will try this.

Thanks

Re: Raid 5 failed on 5big1

Postby totorweb » Fri Nov 01, 2013 12:24 pm

Same problem... :sob

Code: Select all
root@(none):/sbin # fdisk -l /dev/sdd

Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
256 heads, 63 sectors/track, 181688 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdd1               1      181688  1465132000+ 83 Linux


Code: Select all
root@(none):/sbin # mke2fs -j -m 1 /dev/sdd1 -n
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
91578368 inodes, 366283000 blocks
3662830 blocks (1%) reserved for the super user
First data block=0
Maximum filesystem blocks=369098752
11179 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848
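
One detail worth flagging in the command above: mke2fs's -n flag means "dry run", so it only prints what it would do and writes nothing to disk; if the filesystem was never actually created afterwards, that alone would explain the mount errors below. The actual write would be the same command without the flag:
Code: Select all
mke2fs -j -m 1 /dev/sdd1    # without -n the filesystem is actually written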


Code: Select all
root@(none):/sbin # mount /dev/sdd1 /MNT_sdd1 -t ext3
mount: mounting /dev/sdd1 on /MNT_sdd1 failed: Invalid argument

root@(none):/sbin # mount /dev/sdd1 /MNT_sdd1
mount: mounting /dev/sdd1 on /MNT_sdd1 failed: Invalid argument


Code: Select all
root@(none):/sbin # cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
nodev   proc
nodev   tmpfs
nodev   sockfs
nodev   pipefs
nodev   anon_inodefs
nodev   devpts
        ext3
nodev   ramfs
        vfat
        msdos
        xfs

Re: Raid 5 failed on 5big1

Postby Mijzelf » Fri Nov 01, 2013 12:34 pm

Have a look in the kernel message ring buffer using 'dmesg'. It can give some more information.
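
For example (a sketch), right after a failed attempt:
Code: Select all
mount -t ext3 /dev/sdd1 /MNT_sdd1
dmesg | tail -n 20    # the kernel's complaint, if any, shows up at the end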

Oh, BTW, can you post the output of
Code: Select all
ls -l /dev/sdd*

Re: Raid 5 failed on 5big1

Postby totorweb » Fri Nov 01, 2013 12:42 pm

No entry in dmesg when formatting or mounting.


Code: Select all
root@(none):/dev # ls -l /dev/sdd*
brw-rw-rw-    1 root     root        8,  16 Jan  1 00:03 /dev/sdd
brw-rw-rw-    1 root     root        8,  17 Jan  1 00:19 /dev/sdd1
brw-rw-rw-    1 root     root        8,  26 Oct 27  2013 /dev/sdd10
brw-rw-rw-    1 root     root        8,  18 Oct 27  2013 /dev/sdd2
brw-rw-rw-    1 root     root        8,  19 Oct 27  2013 /dev/sdd3
brw-rw-rw-    1 root     root        8,  20 Oct 27  2013 /dev/sdd4
brw-rw-rw-    1 root     root        8,  21 Oct 27  2013 /dev/sdd5
brw-rw-rw-    1 root     root        8,  22 Oct 27  2013 /dev/sdd6
brw-rw-rw-    1 root     root        8,  23 Oct 27  2013 /dev/sdd7
brw-rw-rw-    1 root     root        8,  24 Oct 27  2013 /dev/sdd8
brw-rw-rw-    1 root     root        8,  25 Oct 27  2013 /dev/sdd9


Why does the system still see /dev/sdd2 to sdd10?

Re: Raid 5 failed on 5big1

Postby fvdw » Fri Nov 01, 2013 12:46 pm

The standalone kernel has a default list of all dev nodes.
It seems sdd and sdd1 were updated; I suppose you have run udevstart.

One question: what happens if you do this?
Code: Select all
mkdir /sdb1
mount /dev/sdb1 /sdb1

Re: Raid 5 failed on 5big1

Postby Mijzelf » Fri Nov 01, 2013 12:51 pm

totorweb wrote: No entry in dmesg when formatting or mounting.

Really? That means the mount binary already rejects the command, and it's not passed to the kernel. Are you sure your mountpoint exists? Or there is some filter on the ring buffer. Try 'dmesg -r' (for raw) or 'dmesg -n 7'.

totorweb wrote: Why does the system still see /dev/sdd2 to sdd10?

It doesn't. The files in /dev are just files, which exist until you delete them. In this case /dev/sdd and /dev/sdd1 were overwritten by the udev tool, but the others keep their old values.
If you want to know what the kernel 'sees', try
Code: Select all
cat /proc/partitions
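
And if the stale nodes bother you, a cleanup sketch (assumes the major/minor numbers from the ls -l output above, 8,16 for sdd and 8,17 for sdd1, and that nothing on sdd is in use):
Code: Select all
cat /proc/partitions          # what the kernel actually sees
rm /dev/sdd[2-9] /dev/sdd10   # drop the leftover nodes from the old partition table
mknod /dev/sdd1 b 8 17        # recreate a node by hand when udev is not around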
