Migrating
Cold migration
- All of these steps should be run in screen, so that if the connection from your desktop drops the copy won't die, and so you can run other commands while it is copying. Many of the steps are optional but help show the state of things. Also make sure to read the man pages and look up the options of all commands used before using them!
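For example, you can start a named screen session (the name migration is just an example):
screen -S migration
and reattach to it after reconnecting with
screen -r migration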
- Run domucreate on the new dom0 with the new package, if it's different from the old one; if the old disk size is larger, use the old size. If the disk size will stay the same, make it a little bit bigger when the old disk had no partitions and the new one will. It doesn't matter which image you pick, but the 32 or 64 part should match the old vps so it can boot.
- Shut down the domU on the old dom0.
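For example, with the classic xm toolstack (the name thedomU is a placeholder):
xm shutdown thedomU
xm list
Once it has finished shutting down it should no longer show up in xm list.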
- If there is a /dev/mapper/bootvolumeofthedomU, copy it first.
- Run /sbin/fsck.ext2 (or whatever matches its filesystem) on it to check that the filesystem is ok.
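For example, a read-only check, assuming the boot volume is ext2:
/sbin/fsck.ext2 -n /dev/mapper/bootvolumeofthedomU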
- Log in to the new dom0 with ssh -A (agent forwarding), so the new dom0 can ssh back to the old one in the next steps.
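e.g. (the hostname is a placeholder):
ssh -A root@thenewdom0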
- On the new dom0 run
ssh olddom0 "dd if=/dev/mapper/bootvolumeofthedomU" | dd of=thedomU_boot.ext2
- Run
fsck.ext2 -n thedomU_boot.ext2
It should be the same as it was on the old dom0 (hopefully ok).
- Then copy the main disk of the domU.
- Run
/sbin/kpartx -av /dev/mapper/theolddomUdisk
if its partitions aren't already showing. It's even better to run
kpartx -dv /dev/mapper/theolddomUdisk
then
kpartx -av /dev/mapper/theolddomUdisk
to make sure the dom0 sees the current partitions. On these older servers (hydra, boar, lion) there were no partitions by default, but many users reinstalled with partitions and may not even have ext2 or ext3 filesystems. If they do have partitions, the new disk can be exactly the same size as the old. Run
/sbin/tune2fs -l /dev/mapper/theolddomUdisk
and/or
/sbin/fsck.ext2 -n /dev/mapper/theolddomUdisk
on the disk itself and any partition devices to see what they are. Running file -s on the device, or running file on a little piece of the beginning of the disk copied out with dd (safer), can also help show what it is. If it can't be fscked, copy it anyway and hope it turns out ok; you can still start up the old domU again if it doesn't.
- While the disk is copying over ssh, you can
kill -USR1 theddprocessid
to see how much progress it has made. You can also run
ionice -c2 -n7 -p theddprocessid
to set it to best-effort, lowest IO priority, which is especially helpful for very large transfers that will take a long time.
- If the old disk has partitions
ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk" | dd of=/dev/mapper/thenewdomUdisk
kpartx -av /dev/mapper/thenewdomUdisk
/sbin/fsck.ext3 -n /dev/mapper/thenewdomUdiskpartition
It should be the same as on the old server.
- If the old disk doesn't have partitions
- Make a partition on the new disk big enough for the whole old disk.
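One way to do that (just a sketch; it assumes you want a single Linux partition spanning the whole disk, and fdisk or parted work just as well):
echo ',,L' | sfdisk /dev/mapper/thenewdomUdisk
sfdisk may complain that it can't re-read the partition table on a device-mapper volume; that's what the kpartx step below is for.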
kpartx -av /dev/mapper/thenewdomUdisk
ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk" | dd of=/dev/mapper/thenewdomUdiskpartition
If the partition isn't big enough, dd will fail almost at the end of copying here.
/sbin/fsck.ext3 -n /dev/mapper/thenewdomUdiskpartition
It should be the same as on the old server.
mount /dev/mapper/thenewdomUdiskpartition /mnt/dst
mount -o loop thedomU_boot.ext2 /mnt/src
- Copy the files from /mnt/src to /mnt/dst/boot
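For example:
mkdir -p /mnt/dst/boot
cp -a /mnt/src/. /mnt/dst/boot/
cp -a keeps ownership, permissions, and symlinks, and the /. makes sure hidden files come along too.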
- Edit /mnt/dst/boot/grub/menu.lst to put /boot in front of the file paths.
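For example, a stanza that looked like this on the old setup (the kernel filenames and arguments here are only examples; the leading /boot is the point):
kernel /vmlinuz-2.6.18-xen ro root=LABEL=/
initrd /initrd-2.6.18-xen.img
should become
kernel /boot/vmlinuz-2.6.18-xen ro root=LABEL=/
initrd /boot/initrd-2.6.18-xen.img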
- Edit /mnt/dst/etc/fstab to comment out /boot
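For example, if the fstab had a line like this (the device name is just an example), it should end up commented out:
#/dev/hda1   /boot   ext2   defaults   0 2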
- Unmount /mnt/src and /mnt/dst
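That is:
umount /mnt/src
umount /mnt/dst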
- fsck again maybe.
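For example, another read-only check of the new partition:
/sbin/fsck.ext3 -n /dev/mapper/thenewdomUdiskpartition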
- Start the domU on the new dom0 and try to boot up the vps. Hopefully it all worked! If it didn't, you can hopefully still start it back up on the old dom0. A few times with very large copies the transfer would just die after about 60G, and I haven't figured out what to do about it. Copying the disk into a gzip file before copying it over the network might help, but it will take longer.
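A sketch of that workaround, assuming there is enough free space on the old dom0 and using /var/tmp/thedomU_disk.gz as an example path (use the partition device instead of the whole disk if that's what you're copying into):
ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk | gzip -1 > /var/tmp/thedomU_disk.gz"
ssh theolddom0 "cat /var/tmp/thedomU_disk.gz" | gzip -d | dd of=/dev/mapper/thenewdomUdisk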