From PrgmrWiki

Cold migration

  1. Run all of these steps inside screen, so that a bad connection from your desktop won't kill the transfer, and so you can run other commands while the copy is going. Many of the steps are optional but help show the state of things. Also make sure to read the man pages and look up the options of all commands before using them!
  2. Run domucreate on the new dom0 with the new package, if it's different from the old one; if the old disk is larger, use the old size. If the disk size is staying the same, make the new disk a little bigger when there were no partitions before and there will be now. It doesn't matter which image you pick, but the 32- or 64-bit part must match so the old VPS can boot.
  3. Shutdown the domU on the old dom0.
  4. If the domU has a separate boot volume (/dev/mapper/bootvolumeofthedomU), copy it first.
    1. Run /sbin/fsck.ext2 (or the appropriate fsck for its filesystem) on it to check that the filesystem is ok.
    2. Log in to the new dom0 with ssh -A (agent forwarding, so the new dom0 can ssh back to the old one).
    3. On the new dom0 run
      ssh olddom0 "dd if=/dev/mapper/bootvolumeofthedomU" | dd of=thedomU_boot.ext2
    4. Run
      fsck.ext2 -n thedomU_boot.ext2
      It should be the same as it was on the old dom0 (hopefully ok).
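The copy-then-verify pattern above can be exercised locally on a scratch file. This is just a sketch: the /dev/mapper path and the olddom0 hostname in the real steps are placeholders, and here a plain pipe stands in for the ssh hop between the two dom0s.

```shell
# Stand-in for the boot volume; on a real migration the input side would be
#   ssh olddom0 "dd if=/dev/mapper/bootvolumeofthedomU"
dd if=/dev/urandom of=/tmp/boot_src.img bs=1024 count=256 2>/dev/null

# The pipe plays the role of the ssh connection between the two dom0s.
dd if=/tmp/boot_src.img 2>/dev/null | dd of=/tmp/boot_copy.img 2>/dev/null

# Verify the copy is bit-for-bit identical (matching fsck output on both
# sides is the equivalent check when copying a real filesystem).
cmp /tmp/boot_src.img /tmp/boot_copy.img && echo "boot volume copied intact"
```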
  5. Then copy the main disk of the domU.
    1. Run
      /sbin/kpartx -av /dev/mapper/theolddomUdisk
      if its partitions aren't already showing. It's even better to run
      kpartx -dv /dev/mapper/theolddomUdisk
      kpartx -av /dev/mapper/theolddomUdisk
      to make sure the dom0 sees the current partition table. On these older servers (hydra, boar, lion) there were no partitions by default, but many users reinstalled with partitions and may not even have ext2 or ext3 filesystems. If they do have partitions, the new disk can be exactly the same size as the old. Run
      /sbin/tune2fs -l /dev/mapper/theolddomUdisk
      /sbin/fsck.ext2 -n /dev/mapper/theolddomUdisk
      or similar on the disk itself, and on any partition devices, to see what they are.
      file -s
      or running file on a small piece of the beginning of the disk copied out with dd (safer) can also help identify it. If it can't be fscked, copy it anyway and hope it turns out ok. You can still start the old domU back up if it doesn't.
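To see what these checks are looking at, here is a sketch on a fabricated image file (the real target would be /dev/mapper/theolddomUdisk or one of its partition devices): the ext2/ext3 superblock carries the magic number 0xEF53, stored little-endian at byte offset 1080 from the start of the filesystem.

```shell
DISK=/tmp/fake_disk.img   # stand-in for /dev/mapper/theolddomUdisk
dd if=/dev/zero of="$DISK" bs=1024 count=2 2>/dev/null

# Plant the ext magic bytes (0x53 0xEF on disk) where a real superblock
# would have them, at offset 1080 (0x438).
printf '\123\357' | dd of="$DISK" bs=1 seek=1080 conv=notrunc 2>/dev/null

# Read the two bytes back, the way you might peek at a mystery disk:
magic=$(dd if="$DISK" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
[ "$magic" = "53ef" ] && echo "looks like an ext filesystem"
```

If the magic isn't there, the user may have reinstalled with something other than ext2/ext3, which is exactly the case where fsck won't help and you copy anyway.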
    2. While the disk is copying over ssh, you can
      kill -USR1 theddprocessid
      to see how much progress it has made. You can also run
      ionice -c2 -n7 -p theddprocessid
      to drop it to the best-effort class at the lowest priority (note that ionice needs -p to act on an already-running process). This is especially helpful for very large transfers that will take a long time.
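A quick local illustration of both tricks. The dd here just shovels zeroes slowly so there is something to poke at; on a real migration the PID would belong to the dd writing to /dev/mapper/thenewdomUdisk, and this assumes GNU coreutils dd (which reports progress on SIGUSR1) and the util-linux ionice.

```shell
# Deliberately slow dd (1-byte reads) so it is still running when we signal it.
dd if=/dev/zero of=/dev/null bs=1 count=2000000 2>/tmp/dd_progress.log &
DDPID=$!
sleep 0.2

# GNU dd prints records copied so far to stderr when it gets SIGUSR1.
kill -USR1 "$DDPID" 2>/dev/null

# Demote the copy to best-effort class, lowest (7) priority; -p is what
# targets an already-running process rather than launching a new command.
ionice -c2 -n7 -p "$DDPID" 2>/dev/null || true

wait "$DDPID"
grep 'records in' /tmp/dd_progress.log
```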
    3. If the old disk has partitions
      1. ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk" | dd of=/dev/mapper/thenewdomUdisk
      2. kpartx -av /dev/mapper/thenewdomUdisk
      3. /sbin/fsck.ext3 -n /dev/mapper/thenewdomUdiskpartition
        It should be the same as on the old server.
    4. If the old disk doesn't have partitions
      1. Make a partition on the new disk big enough for the whole old disk.
      2. kpartx -av /dev/mapper/thenewdomUdisk
      3. ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk" | dd of=/dev/mapper/thenewdomUdiskpartition
        If the partition isn't big enough, dd will fail near the end of the copy.
      4. /sbin/fsck.ext3 -n /dev/mapper/thenewdomUdiskpartition
        It should be the same as on the old server.
      5. mount /dev/mapper/thenewdomUdiskpartition /mnt/dst
      6. mount -o loop thedomU_boot.ext2 /mnt/src
      7. Copy the files from /mnt/src to /mnt/dst/boot
      8. Edit /mnt/dst/boot/grub/menu.lst to prefix the kernel and initrd file paths with /boot.
      9. Edit /mnt/dst/etc/fstab to comment out the /boot entry.
      10. Unmount /mnt/src and /mnt/dst
      11. fsck the new partition again to be safe.
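Steps 8 and 9 can be done with sed. This sketch builds fabricated sample copies of the two files (the real ones live under /mnt/dst) and assumes GNU sed; the kernel version and device names are made up for illustration.

```shell
mkdir -p /tmp/dst/boot/grub /tmp/dst/etc
cat > /tmp/dst/boot/grub/menu.lst <<'EOF'
title Linux
    kernel /vmlinuz-2.6.18 root=/dev/xvda1 ro
    initrd /initrd-2.6.18.img
EOF
cat > /tmp/dst/etc/fstab <<'EOF'
/dev/xvda1  /      ext3  defaults  1 1
/dev/xvda2  /boot  ext2  defaults  1 2
EOF

# /boot is now a directory on the root filesystem, so prefix the kernel
# and initrd paths with /boot:
sed -i -E 's|^([[:space:]]*kernel[[:space:]]+)/|\1/boot/|; s|^([[:space:]]*initrd[[:space:]]+)/|\1/boot/|' /tmp/dst/boot/grub/menu.lst

# And stop fstab from mounting a /boot partition that no longer exists:
sed -i -E 's|^[^#].*[[:space:]]/boot[[:space:]]|#&|' /tmp/dst/etc/fstab

grep kernel /tmp/dst/boot/grub/menu.lst
grep boot /tmp/dst/etc/fstab
```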
  6. Try to boot up the VPS. Hopefully it all worked! If it didn't, you can hopefully still start it back up on the old dom0. A few times with very large copies, the transfer would just die after about 60G and I haven't figured out why. Copying the disk into a gzip file before sending it over the network might help, but it will take longer.
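One variant worth trying (a sketch, not something verified on those servers): compress in-stream instead of staging a gzip file, so nothing extra lands on the old dom0's disk. The real invocation, following the pull-from-the-new-dom0 pattern used above, would look like `ssh theolddom0 "dd if=/dev/mapper/theolddomUdisk | gzip -1" | gunzip | dd of=/dev/mapper/thenewdomUdisk`. Below, a local pipe stands in for the ssh hop so the round trip can be checked.

```shell
# Scratch "disk" standing in for /dev/mapper/theolddomUdisk.
dd if=/dev/urandom of=/tmp/disk_src.img bs=1024 count=512 2>/dev/null

# gzip -1 trades compression ratio for speed, which suits a bulk transfer;
# the pipe here plays the role of the ssh connection.
dd if=/tmp/disk_src.img 2>/dev/null | gzip -1 | gunzip | dd of=/tmp/disk_dst.img 2>/dev/null

cmp /tmp/disk_src.img /tmp/disk_dst.img && echo "round trip intact"
```

Random data like this actually grows slightly under gzip; a real disk image full of zeroed free space compresses well, which is where the time savings would come from.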