Upgrading 2TB Disks to 4TB Disks

Postby hvymetal86 » Sat Aug 31, 2019 12:17 am

I'm planning on upgrading the 2TB disks in my 5big Network vs2 to 4TB drives. I already have one 4TB drive in the healthy array, which replaced a failed 2TB drive (sdc); the rest are still 2TB drives. My understanding is that I can use the safely remove option in the RAID volume setup to take out each 2TB disk in turn, replace it with a 4TB one, and get the NAS to rebuild onto the 4TB disk. Rinse and repeat. Then, when all are done, I should be able to expand the RAID volume.
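
For reference, this is the kind of check I used to confirm the array state from a shell (a sketch only; mdadm availability and the /dev/md0 name are assumptions on this firmware):

Code:
# List all md arrays; [5/5] [UUUUU] means all five members are active
cat /proc/mdstat

# Detailed state of one array; /dev/md0 is an assumption, use the name mdstat reports
mdadm --detail /dev/md0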

I've prepared in this way:
  1. Updated firmware to 18.1 today
  2. Ran printenv via u-boot and confirmed that it supports GPT and 64-bit LBA; the result was:
    Code:
    uboot_capabilities=gpt,lba64
  3. Confirmed the array is currently healthy
  4. My most important data is backed up already in case this goes badly
There are two things I'm unsure about and want to figure out before I start.
  1. Can I use the above "safely remove, insert new disk, rebuild" process via the GUI for sda, since it holds the fvdw firmware? If not, what steps will I need?
  2. Once all drives are replaced, can I expand the volume via the web GUI? Is it expanded automatically once all drives are the same larger size? Or do I need to expand it via the command line? If the latter, I'd like help with the command(s) to ensure I don't screw it up; my guess is sketched below.
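
From other mdadm-based systems, my guess is the expansion would look something like the sketch below (assumptions: the array is /dev/md0 and the filesystem is ext3/ext4), but I'd like confirmation before running anything:

Code:
# Grow the array so each member uses all space on the now-larger disks;
# /dev/md0 is an assumption, check /proc/mdstat for the real device name
mdadm --grow /dev/md0 --size=max

# Then grow the filesystem on top of the array (assuming ext3/ext4)
resize2fs /dev/md0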

Note: the hardware clock on my NAS is bad, but I've already set it to use internet time in the past. I get a warning when applying certain settings, but they still apply correctly, and I was previously told this shouldn't cause problems beyond the warning: http://plugout.net/viewtopic.php?f=26&t=2852&start=10#p29210

Full u-boot printenv in case it matters:
Code:
Marvell>> printenv
printenv
baudrate=115200
loads_echo=0
rootpath=/mnt/ARM_FS/
netmask=255.255.255.0
console=console=ttyS0,115200 mtdparts=nand_mtd:0xa0000@0(uboot)ro,0xff00000@0x100000(root)
CASset=min
MALLOC_len=1
ethprime=egiga0
bootargs_root=root=/dev/nfs rw
bootargs_end=:::DB88FXX81:eth0:none
image_name=uImage
standalone=fsload 0x2000000 $(image_name);setenv bootargs $(console) root=/dev/mtdblock0 rw ip=$(ipaddr):$(serverip)$(bootargs_end) $(mvPhoneConfig); bootm 0x2000000;
ethmtu=1500
eth1mtu=1500
mvPhoneConfig=mv_phone_config=dev0:fxs,dev1:fxs
mvNetConfig=mv_net_config=(00:11:88:0f:62:81,0:1:2:3),mtu=1500
usb0Mode=host
yuk_ethaddr=00:00:00:EE:51:81
nandEcc=1bit
netretry=no
rcvrip=169.254.100.100
loadaddr=0x02000000
autoload=no
stderr=serial
mainlineLinux=no
enaMonExt=no
enaCpuStream=no
enaWrAllo=no
pexMode=RC
disL2Cache=no
setL2CacheWT=yes
disL2Prefetch=yes
enaICPref=yes
enaDCPref=yes
sata_dma_mode=yes
netbsd_en=no
vxworks_en=no
disaMvPnp=no
enaAutoRecovery=yes
uboot_capabilities=gpt,lba64
start_lump=lump 3
pre_lump=lump 1
resetdisk=ide reset
bootdelay=0
boot_fail=lump
kernel_addr=0x800000
productType_env=BIG5_KW
primaryPart=6
secondaryPart=A
boot_usb=usb start;usbboot 0x800000 0:1;bootm;
resetFlag_env=0
bootargs=console=ttyS0,115200 root=/dev/sda7 ro reset=0 productType=PRO_KW
bootcmd=run disk_disk
mtdids=nand0=nand_mtd
mtdparts=mtdparts=nand_mtd:1m(u-boot),16m(uImage),-(root)
boot_nand=setenv bootargs console=ttyS0,115200 ${mtdparts} root=/dev/mtdblock2 ro reset=${resetFlag_env} productType=${productType_env}; nboot ${kernel_addr} uImage; bootm ${kernel_addr}
boot_disk10=if disk ${kernel_addr} 5:${primaryPart}; then setenv rootfs /dev/sde7; else run boot_nand; fi
boot_disk9=if disk ${kernel_addr} 6:${primaryPart}; then setenv rootfs /dev/sdd7; else run boot_disk10; fi
boot_disk8=if disk ${kernel_addr} 1:${primaryPart}; then setenv rootfs /dev/sdc7; else run boot_disk9; fi
boot_disk7=if disk ${kernel_addr} 2:${primaryPart}; then setenv rootfs /dev/sdb7; else run boot_disk8; fi
boot_disk6=if disk ${kernel_addr} 3:${primaryPart}; then setenv rootfs /dev/sda7; else run boot_disk7; fi
boot_disk5=if disk ${kernel_addr} 5:${secondaryPart}; then setenv rootfs /dev/sde7; else run boot_disk6; fi
boot_disk4=if disk ${kernel_addr} 6:${secondaryPart}; then setenv rootfs /dev/sdd7; else run boot_disk5; fi
boot_disk3=if disk ${kernel_addr} 1:${secondaryPart}; then setenv rootfs /dev/sdc7; else run boot_disk4; fi
boot_disk2=if disk ${kernel_addr} 2:${secondaryPart}; then setenv rootfs /dev/sdb7; else run boot_disk3; fi
boot_disk1=if disk ${kernel_addr} 3:${secondaryPart}; then setenv rootfs /dev/sda7; else run boot_disk2; fi
boot_disk=if test ${resetFlag_env} -eq 0; then run boot_disk1; else run boot_disk6; fi
disk_disk=run boot_disk; setenv bootargs console=ttyS0,115200 root=${rootfs} ro reset=${resetFlag_env} productType=${productType_env}; bootm ${kernel_addr};
ethaddr=00:D0:4B:93:4A:84
eth1addr=00:D0:4B:93:4A:85
ethact=egiga0
ipaddr=192.168.69.2
ncip=192.168.69.41
serverip=192.168.69.41
stdin=nc
stdout=nc

Environment size: 3030/131068 bytes

Re: Upgrading 2TB Disks to 4TB Disks

Postby fvdw » Mon Sep 02, 2019 5:31 pm

Do you also want to replace sda with a 4 TB disk?

If that is the case, then you cannot do that without a complete new install.

You could copy the system partitions, but I think the problem will be copying the RAID data partition to a new disk. Jocko has more knowledge about RAID, so maybe he can comment.
A trial could be: first replace the 4 disks sdb, sdc, sdd, sde one by one and add them to the RAID (yes, only 2 TB of each will be used). Then take out all 4TB disks and put only the sda disk and a new blank 4TB disk in slots 1 and 2. Then load the standalone kernel and, with it, make a partition table on the 4TB disk (sdb), copy partition images of sda1, 2, 5, 6, 7 to the sdb disk using the dd command, and set the sdb8 partition as RAID. Then take out the old sda, put sdb in as the sda disk, add the 4 other 4TB disks you already prepared with RAID, and rebuild the array to add sda. After that, grow the array to max size to utilize the full capacity of the disks.
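
A hedged sketch of that partition-copy step (the partition numbers follow the description above; verify them against your own disks first, since dd silently overwrites whatever target it is given):

Code:
# Copy the system partitions sda1,2,5,6,7 onto the matching sdb partitions;
# assumes the partition table on sdb was already created with partitions at
# least as large as the originals
for p in 1 2 5 6 7; do
    dd if=/dev/sda$p of=/dev/sdb$p bs=1M
done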

Note: the above is only a guess from my side at how it could be done. I advise waiting for Jocko's advice on whether it can work according to him and whether there is another way.

Re: Upgrading 2TB Disks to 4TB Disks

Postby hvymetal86 » Tue Sep 03, 2019 2:35 am

Yes, I need to replace sda with a 4TB disk as well, since all 5 disks are part of the RAID5 array. From a previous failure, sdc is already a 4TB disk, but the rest are 2TB ones. I don't have the storage to fully back up the array, so I need to find a way to do this without losing the data. As I said before, I have backups of the important stuff I can't replace in case something goes wrong, but I don't want to attempt this knowing I'll lose things for sure.

I'd greatly appreciate Jocko weighing in as well, and I appreciate your help so far.

Re: Upgrading 2TB Disks to 4TB Disks

Postby Jocko » Fri Sep 06, 2019 11:43 am

Hi

Can you wait until next week, as I am currently busy?

Anyhow, the first step will be to install the firmware on the new sda disk using the fvdw-sl console. Of course, you first have to unplug all the other disks and note which disk is in which slot.

After this step you have to restore the content of the former sda5 partition onto the new sda5.
For this step, make a tarball of the rw_fs folder while the old system is still running. So do:
Code:
cd /rw_fs
tar -czf /direct-usb/fvdw/my-sda5.gz .

Then get this file and put it in the tftp folder (the subfolder used by the fvdw-sl console) on your laptop (the my-sda5.gz file will be available in the fvdw share from your laptop).

Now, with your new sda disk in place and telnet access provided by the fvdw-sl console, upload my-sda5.gz:
Code:
tftp -g -l my-sda5.gz -r my-sda5.gz ip-pc
where ip-pc is the IP address of your laptop.

Then overwrite the sda5 content:
Code:
mkdir /sda5
mount /dev/sda5 /sda5
tar -xzf /my-sda5.gz -C /sda5
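
One small addition that is not in the steps above but is probably wise before powering off (assuming nothing else is using the mount):

Code:
# Flush pending writes and unmount cleanly before stopping the NAS
sync
umount /sda5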


Stop your NAS. Keep your new sda disk in place and plug the old sd[bcde] disks back in, in their original order.

Then restart the NAS. If all is OK, you should have a volume Vol-A (sda8) and a degraded RAID5 (missing sda8).

If that is the case, you can add the new sda8 to your RAID5 to restore redundancy by using the web interface (disk setup menu -> click on your RAID volume name).
Note: re-syncing will take many hours (around 2 days).
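
For reference, the command-line equivalent of that web-interface step on mdadm-based systems is roughly as follows; the array device /dev/md0 is an assumption (check /proc/mdstat for the real name) and the web interface remains the supported route:

Code:
# Add the new sda8 back into the degraded array to start the rebuild
mdadm --manage /dev/md0 --add /dev/sda8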

Re: Upgrading 2TB Disks to 4TB Disks

Postby hvymetal86 » Fri Sep 06, 2019 9:47 pm

Thanks for helping out. I can wait until next week, no problem.

In the meantime, I'll start replacing the 2TB disks that aren't sda and see what other prep I can do based on your instructions.

Re: Upgrading 2TB Disks to 4TB Disks

Postby Jocko » Sat Sep 07, 2019 8:52 am

Hi

Ok for next week
hvymetal86 wrote:In the meantime, I'll start replacing the 2TB disks that aren't sda and see what other prep I can do based on your instructions.
This must be done disk by disk, and after each replacement you need to wait for a clean RAID status. With 2TB disks, re-syncing will take around 1.5 days per disk, so with 3 disks to change this step will take 4-5 days.
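
To follow the re-sync progress from a shell (a hedged aside; /proc/mdstat is standard wherever mdadm runs):

Code:
# The array is clean once the recovery line disappears and the status
# shows e.g. [5/5] [UUUUU]; if watch is unavailable, just re-run cat
watch -n 60 cat /proc/mdstat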

You will need to wait until your RAID regains redundancy before changing sda.

Note: in any case, you will be able to restore your original RAID with your 2TB disks (the old sda and the 3 other disks), though with a degraded status (no redundancy), provided you have noted their former slots :-D

