fvdw-sl on WD My Cloud

Re: fvdw-sl on WD My Cloud

Postby matt_max » Sun Aug 14, 2022 2:27 pm

Hi guys, sorry for the delay. I was away for a few days.

@Jocko: Yes - that's right: all of my tests are performed on my cloned smaller disk. Here is the content of /etc/mdadm/mdadm.conf:
Code: Select all
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <ignore>

# instruct the monitoring daemon where to send mail alerts
#MAILADDR root

# This file was auto-generated on Thu, 30 Aug 2012 16:25:22 -0700
# by mkconf 3.1.4-1+8efb9d1
MAILADDR root

There is no usr/etc/ or usr/etc/mdadm/ directory. And here is /etc/fstab:
Code: Select all
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0

## found that Access DLNA can sometimes temporarily use up to 70M of the /tmp space
## increasing to 100M maximum
## setting number of inodes to 20K
tmpfs /tmp tmpfs rw,size=100M,nr_inodes=20K 0 0

/dev/md1 / ext3 defaults,noatime,nodiratime,data=writeback,barrier=0 0 0

As you can see, the /dev/md1 array is defined there. When I changed it to md0 I didn't see any errors in the console output.
In the meantime I've checked partition sda5 (using dd and HxD) and it is still empty. After that I checked sda7, and it looks like it hasn't been changed. My new dump is exactly the same as the one @fvdw posted in one of his previous posts (config-new-trial1.zip).
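On Linux, a similar quick emptiness check could look like this (a sketch; the device name is just an example, and all-zero sectors collapse to a single * line in the output):
Code: Select all
dd if=/dev/sda5 bs=512 count=8 2>/dev/null | hexdump -C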

Re: fvdw-sl on WD My Cloud

Postby fvdw » Wed Aug 17, 2022 1:40 pm

Thanks for the input, but I think they use the kernel's raid autodetect. That means the raid arrays are not yet assembled and running at that point, so I doubt the settings you mention are used to detect the array.
A good description of raid autodetect by the kernel can be found here LINK
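As a sketch, the conditions for in-kernel autodetect could be verified from a linux PC (device names are examples): it only considers partitions marked as raid (MBR type 0xfd, or the Linux RAID type GUID on GPT) that carry a 0.90 superblock.
Code: Select all
fdisk -l /dev/sda                             # partition type must be Linux raid autodetect / Linux RAID
mdadm --examine /dev/sda1 | grep -i version   # in-kernel autodetect needs metadata 0.90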

Now we see the following relevant info in the console outputs of the trials with the unmodified sda7 and the modified sda7 (are these the same disks?)

First, there is no problem or CRC check error when barebox reads sda7.
Both the original and the modified disk report only this line:
Code: Select all
sataenv: partition 7 loading  size 522

In case of a CRC check failure barebox would have reported a CRC error.

That the modification of sda7 took effect can be seen in the kernel command line passed to the kernel.
Code: Select all
original
commandline: console=ttyS0,115200n8, init=/sbin/init root=/dev/md1 raid=autodetect rootfstype=ext3 rw noinitrd debug initcall_debug swapaccount=1 panic=3 mac_addr=00:90:A9:D8:5E:AD model=sq serial= board_test= btn_status=0
arch_number: 1094

modified sda7
commandline: console=ttyS0,115200n8, init=/sbin/init root=/dev/md0 raid=autodetect rootfstype=ext3 rw noinitrd debug initcall_debug swapaccount=1 panic=3 mac_addr=00:90:A9:D8:5E:AD model=sq serial= board_test= btn_status=0
arch_number: 1094

note that root=/dev/md1 is changed to root=/dev/md0

At the end of kernel loading the raid autodetect is done.

Disk with original sda7
Code: Select all
[   10.462304] async_continuing @ 1 after 0 usec
[   10.466738] md: Waiting for all devices to be available before autodetect
[   10.473563] md: If you don't use raid, use raid=noautodetect
[   10.479276] async_waiting @ 1
[   10.482261] async_continuing @ 1 after 0 usec
[   10.487595] md: Autodetecting RAID arrays.
[   10.566395] md: Scanned 2 and added 2 devices.
[   10.570865] md: autorun ...
[   10.573676] md: considering sda2 ...
[   10.577328] md:  adding sda2 ...
[   10.580599] md:  adding sda1 ...
[   10.584747] md: created md1
[   10.587586] md: bind<sda1>
[   10.590365] md: bind<sda2>
[   10.593140] md: running: <sda2><sda1>
[   10.597257] bio: create slab <bio-1> at 1
[   10.601321] md1: WARNING: sda2 appears to be on the same physical disk as sda1.
[   10.608691] True protection against single-disk failure might be compromised.
[   10.616070] md/raid1:md1: active with 2 out of 2 mirrors
[   10.621543] md1: detected capacity change from 0 to 2147418112
[   10.627635] md: ... autorun DONE.
[   10.646932]  md1: unknown partition table
[   10.657387] kjournald starting.  Commit interval 5 seconds
[   10.714863] EXT3-fs (md1): using internal journal
[   10.719630] EXT3-fs (md1): mounted filesystem with ordered data mode
[   10.726074] VFS: Mounted root (ext3 filesystem) on device 9:1.
[   10.732475] async_waiting @ 1
[   10.735464] async_continuing @ 1 after 0 usec
[   10.740533] Freeing init memory: 320K


Now disk with modified sda7
Code: Select all
[   10.412823] async_continuing @ 1 after 0 usec
[   10.418151] md: Autodetecting RAID arrays.
[   10.495237] md: Scanned 2 and added 2 devices.
[   10.499725] md: autorun ...
[   10.502537] md: considering sda2 ...
[   10.506165] md:  adding sda2 ...
[   10.509434] md:  adding sda1 ...
[   10.513581] md: created md127
[   10.516593] md: bind<sda1>
[   10.519372] md: bind<sda2>
[   10.522139] md: running: <sda2><sda1>
[   10.526240] bio: create slab <bio-1> at 1
[   10.530303] md127: WARNING: sda2 appears to be on the same physical disk as sda1.
[   10.537847] True protection against single-disk failure might be compromised.
[   10.545225] md/raid1:md127: active with 2 out of 2 mirrors
[   10.550863] md127: detected capacity change from 0 to 2147418112
[   10.557128] md: ... autorun DONE.
[   10.560715] EXT3-fs (md0): error: unable to read superblock
[   10.615474] List of all partitions:
[   10.618997] 0800       488386584 sda  driver: sd
[   10.623663]   0801         2097152 sda1 71b0912e-71c3-4684-8e29-13940f247eaf
[   10.630790]   0802         2097152 sda2 2e86a28f-5ccb-49a9-b88a-f0244a3d7095
[   10.637909]   0803          512000 sda3 6e040161-ce4b-4f02-8ec1-e75262a64736
[   10.645013]   0804       419430400 sda4 ce87c6ab-0f04-45eb-b1e8-3a26c7157818
[   10.652134]   0805          102400 sda5 01fe79d5-2825-44ba-ba29-bc5dadbb311f
[   10.659249]   0806          102400 sda6 da5f355f-3d54-423e-a256-03869b80451e
[   10.666364]   0807            2048 sda7 959fb43a-e908-4799-b15e-bbd84862c5e2
[   10.673468]   0808            3072 sda8 cf5d77bc-27de-4118-8e46-07a49ac80d0b
[   10.680587] 097f         2097088 md127  (driver?)
[   10.685327] No filesystem could mount root, tried:  ext3
[   10.690713] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)

Note that in this case autodetect finds a raid1, but with the name md127 :scratch why md127??

Then the kernel tries to mount the detected raid array.
On the original disk it uses md1 and all goes fine.
On the modified disk it tries to use md0, but as that array is not present it fails, and the kernel panics as no root file system could be mounted.

My question is: what did you change on the disk with the modified sda7 such that the kernel sees the raid1 as /dev/md127 instead of /dev/md0?

Please compare the partition tables of the original and the modified disk.
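For what it's worth: with 0.90 metadata the kernel creates the array under the minor number stored in the superblock (the "preferred minor"). An array that was once assembled on another system as md127 may have had this field rewritten to 127. A sketch of how to inspect it on a linux PC (device name is an example):
Code: Select all
mdadm --examine /dev/sda1 | grep -i 'preferred minor'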

Re: fvdw-sl on WD My Cloud

Postby fvdw » Wed Aug 17, 2022 1:47 pm

I edited my previous post

Re: fvdw-sl on WD My Cloud

Postby matt_max » Wed Aug 17, 2022 1:51 pm

I really do not know. AFAIR, when I connected the spare disc to my linux machine, the system autodetected the array partitions and automatically assembled them under /dev/md127, but I thought it didn't matter as long as I didn't write this setting into any file on the system.

P.S. All of the above tests were made on the spare (smaller) disc. I'm not using the original disc.

P.P.S. I think I found it! Check this link. Look at this:
This happens when an array is created on one system and then physically installed to a different system. In this case, the second array was created using mdadm...
The RAID array’s superblock stores the array name.

This mdX array has metadata 0.9, so I can try to fix it using Option #2 described here.

Re: fvdw-sl on WD My Cloud

Postby fvdw » Sat Aug 20, 2022 5:42 pm

This mdX array has metadata 0.9 so I can try to fix it using Option #2 described here.

Any news?

Re: fvdw-sl on WD My Cloud

Postby matt_max » Sun Aug 21, 2022 1:02 pm

I've connected the smaller test disc to my linux machine and recreated the array metadata with:
Code: Select all
mdadm --stop /dev/md127
mdadm --assemble --update=super-minor /dev/md0 /dev/sdb1 /dev/sdb2

...and then I have:
Code: Select all
sudo mdadm --query --detail /dev/md0
/dev/md0:
           Version : 0.90
     Creation Time : Tue Aug  9 23:29:32 2022
        Raid Level : raid1
        Array Size : 2097088 (2047.94 MiB 2147.42 MB)
     Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
      Raid Devices : 2
     Total Devices : 2
   Preferred Minor : 0
       Persistence : Superblock is persistent

       Update Time : Sun Aug 14 16:16:31 2022
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              UUID : b3b931db:75eab405:997ceb89:519129b4
            Events : 0.2022

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2


I think it is ok now. I do not see any problems in the console output. The WD starts correctly and I can open the web interface.

Re: fvdw-sl on WD My Cloud

Postby fvdw » Sun Aug 21, 2022 2:26 pm

:thumbup
Now the next step.
Take the disk out of the wd cloud and connect it to your linux pc.
Assemble the /dev/md0 array and mount it.
Go to the mountpoint and delete all files and folders.
Before doing that you could make a tar archive of all these files. It might be handy later on to extract some files we may need, like kernel modules. It is easier to extract them from a tar archive than from a partition image.

After you have deleted all files, extract the attached archive in the mountpoint of the array.
So basically we replace all files in the array with the ones in the attached archive (see the sketch at the end of this post).
Umount the array and stop it.
Put the disk back into your wd cloud, boot it, and see what happens.
The new root filesystem contains a basic linux setup and should, in this case, try to start a telnet server.
This last step might fail if the ethernet interface is not initialized; this may happen, as in the wd cloud firmware boot I see they load some modules related to the ethernet interface.
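As a sketch, the whole procedure on a linux PC could look roughly like this (the device names /dev/sdb1 and /dev/sdb2, the mountpoint /mnt/md0, and the backup filename are assumptions):
Code: Select all
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdb2   # assemble the raid1 mirror
mkdir -p /mnt/md0
mount -t ext3 /dev/md0 /mnt/md0
tar -czf ~/md0-original.tar.gz -C /mnt/md0 .    # optional backup of the old rootfs
find /mnt/md0 -mindepth 1 -delete               # delete everything, dotfiles included
tar -xzf p10rootfs.tar.gz -C /mnt/md0           # extract the attached archive
umount /mnt/md0
mdadm --stop /dev/md0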

Re: fvdw-sl on WD My Cloud

Postby matt_max » Sun Aug 21, 2022 5:00 pm

I think I messed something up. First I tried to zero the array with dd if=/dev/zero of=/dev/md0, but the system hung. Then I wasn't able to mount it (maybe due to errors with dd). Then I recreated the partition with
Code: Select all
dd if=/image_of_md.bin of=/dev/md0

After that I just deleted all files with rm -r /dev/md0 and extracted the contents of the tar.gz file using
Code: Select all
tar -xvf p10rootfs.tar.gz -C /dev/md0

But now the system is unable to boot. Here is the output from the console.
As you can see, the system cannot detect the partition :scratch
Code: Select all
md0: unknown partition table

Re: fvdw-sl on WD My Cloud

Postby fvdw » Sun Aug 21, 2022 7:26 pm

That error about the partition table is no problem. It is expected, as /dev/md0 doesn't have a partition table.

Anyhow, assembling the array and detecting and mounting the file system seem to have been successful.
The problem is it cannot find /sbin/init; here is the relevant part of the output

Code: Select all
[   10.483139] md: Waiting for all devices to be available before autodetect
[   10.489987] md: If you don't use raid, use raid=noautodetect
[   10.495689] async_waiting @ 1
[   10.498673] async_continuing @ 1 after 0 usec
[   10.503991] md: Autodetecting RAID arrays.
[   10.580673] md: Scanned 2 and added 2 devices.
[   10.585143] md: autorun ...
[   10.587983] md: considering sda2 ...
[   10.591605] md:  adding sda2 ...
[   10.594873] md:  adding sda1 ...
[   10.598141] md: created md0
[   10.600954] md: bind<sda1>
[   10.603740] md: bind<sda2>
[   10.606523] md: running: <sda2><sda1>
[   10.610650] bio: create slab <bio-1> at 1
[   10.614715] md0: WARNING: sda2 appears to be on the same physical disk as sda1.
[   10.622084] True protection against single-disk failure might be compromised.
[   10.629471] md/raid1:md0: not clean -- starting background reconstruction
[   10.636325] md/raid1:md0: active with 2 out of 2 mirrors
[   10.641767] md0: detected capacity change from 0 to 2147418112
[   10.647846] md: ... autorun DONE.
[   10.647938] md: resync of RAID array md0
[   10.647948] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[   10.647957] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[   10.647973] md: using 2048k window, over a total of 2097088k.
[   10.694397]  md0: unknown partition table
[   10.704900] kjournald starting.  Commit interval 5 seconds
[   10.817725] EXT3-fs (md0): using internal journal
[   10.822460] EXT3-fs (md0): recovery complete
[   10.826786] EXT3-fs (md0): mounted filesystem with ordered data mode
[   10.833216] VFS: Mounted root (ext3 filesystem) on device 9:0.
[   10.839618] async_waiting @ 1
[   10.842607] async_continuing @ 1 after 0 usec
[   10.847677] Freeing init memory: 320K
[   10.929259] Failed to execute /sbin/init.  Attempting defaults...
[   10.957138] Kernel panic - not syncing: No init found.  Try passing init= option to kernel. See Linux Documentation/init.txt for guidance.

So md0 is built using sda1 and sda2; I assume you created the array using sdb1 and sdb2 on your linux PC, correct?

Now, init is present in the /sbin folder of the root file system I sent you; it is a symlink to busybox. It could be that busybox failed to run, although this is very unlikely.

What you could do is connect the disk to your linux pc and do the following:
assemble /dev/md0
do not mount /dev/md0
format /dev/md0 in ext3 (mke2fs -t ext3 -m 1 /dev/md0)
After that, mount the array on a mountpoint, in this example /md0:
mount -t ext3 /dev/md0 /md0
Then delete all files in it; there should be none, as we just formatted it, but better check to be sure.
Now extract the root filesystem I sent you in the mountpoint.
Assuming your mountpoint is named /md0, verify that /md0/sbin/init is present, that busybox is in /md0/bin, and that it has executable permissions (which it should have if you extracted the root file system); see the sketch below.
If all is ok, umount the array and stop it. Then put the disk back in your wd cloud and try booting.
If not ok, post here what you found.
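A quick sketch of those checks (using /md0 as above; the expected results are illustrative):
Code: Select all
ls -l /md0/sbin/init      # should be a symlink pointing at busybox
ls -l /md0/bin/busybox    # must exist and be executable
file /md0/bin/busybox     # sanity check: should be an ARM binary, not x86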

Re: fvdw-sl on WD My Cloud

Postby Mijzelf » Sun Aug 21, 2022 8:07 pm

Code: Select all
[ 0.000000] Built 1 zonelists in Zone order, mobility grouping off. Total pages: 3773
[ 0.000000] Memory: 44MB 192MB = 236MB total

This kernel uses 64kB memory pages. Most binaries cannot run on that. The SoC supports both 4kB and 64kB pages. AFAIK it's a .config setting which does the switch. The ZyXEL NAS540 originally came with 64kB pages (on firmware 5.0x), but in firmware 5.10 that was changed to 4kB. The upgrade package didn't contain a new bootloader, so it must be in the kernel settings.
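For what it's worth, the page size of a running kernel can be checked from a shell on the box like this (a sketch; getconf may be missing on a minimal busybox system, hence the /proc fallback):
Code: Select all
getconf PAGESIZE                                              # prints 4096 or 65536
awk '/KernelPageSize/ {print $2, $3; exit}' /proc/self/smaps  # e.g. "64 kB"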
