
openVZ

[Debian] Installing RHEL6-based openvz kernel

My low-power machine required VSwap (a new OpenVZ feature that is currently only supported in RHEL6-based OpenVZ kernels). Instead of compiling the whole kernel, I found a nice document on the OpenVZ wiki. Just grab the kernel RPM and do the following.

Installation:

apt-get install alien fakeroot
fakeroot alien -k vzkernel-2.6.32-042stab021.1.i686.rpm
sudo dpkg -i vzkernel_2.6.32-042stab021.1_i386.deb
sudo update-initramfs -c -k 2.6.32-042stab021.1
sudo update-grub

To boot the RHEL6 kernel I edited grub.cfg and moved its entry above the previous kernels. It's a crude method; I'm sure there is a better way to change the order of kernels. Sources:

http://wiki.openvz.org/Vswap
http://wiki.openvz.org/Install_kernel_from_rpm_on_debian
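A cleaner approach than hand-editing grub.cfg is to pin the default entry in /etc/default/grub (GRUB 2). This is a minimal sketch; the menu title below is an assumption, so copy the exact title from your own grub.cfg:

```shell
# /etc/default/grub -- select the OpenVZ kernel by its menu title
# (hypothetical title; list yours with: grep menuentry /boot/grub/grub.cfg)
GRUB_DEFAULT="Debian GNU/Linux, with Linux 2.6.32-042stab021.1"
```

Then run update-grub to regenerate grub.cfg with the new default.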
  1. Get the latest kernel and utils from Download/kernel/rhel6 (or Download/kernel/rhel6-testing) and Download/utils. You need

    vzkernel
    vzkernel-devel
    vzctl-core
    vzctl
    ploop-lib
    ploop
    vzquota
    
    vzkernel-devel is optional.
    
  2. Install fakeroot and alien.

    apt-get install fakeroot alien
    
  3. Convert all the RPMs to debs using alien.

    fakeroot alien --to-deb --scripts --keep-version vz*.rpm ploop*.rpm
    
  4. Install debs.

    dpkg -i vz*.deb ploop*.deb
    

If dpkg complains about overwriting files from other packages, try adding the --force-overwrite option (dpkg -i --force-overwrite vz*.deb ploop*.deb).

  1. Modify /boot/grub/menu.lst. See “configuring the bootloader” of Quick installation.

  2. Edit /etc/sysctl.conf. See “sysctl” of Quick installation.

  3. Make OpenVZ boot automatically.

    update-rc.d vz defaults
    update-rc.d vzeventd defaults
    

Bind Mounts OpenVZ

Recent Linux kernels support an operation called 'bind mounting' which makes part of a mounted filesystem visible at some other mount point. See 'man mount' for more information. Bind mounts can be used to make directories on the hardware node visible to the container.

Filesystem layout

OpenVZ uses two directories. Assuming our container is numbered 777, these directories are:

  • VE_PRIVATE: $VZDIR/private/777
  • VE_ROOT: $VZDIR/root/777

Note: $VZDIR is usually /vz; on Debian systems, however, it is /var/lib/vz. In this document it is referred to as $VZDIR -- substitute it with whatever you have.

VE_PRIVATE is the place for all the container files. VE_ROOT is the mount point to which VE_PRIVATE is mounted during container start (or when you run vzctl mount).
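The layout above can be sketched with a couple of shell variables (assuming the Debian default $VZDIR):

```shell
# Compute the two per-container directories for CT 777
VZDIR=/var/lib/vz                    # Debian default; /vz elsewhere
CTID=777
VE_PRIVATE="$VZDIR/private/$CTID"    # all container files live here
VE_ROOT="$VZDIR/root/$CTID"          # mounted here while the CT runs
echo "$VE_PRIVATE"                   # /var/lib/vz/private/777
echo "$VE_ROOT"                      # /var/lib/vz/root/777
```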

Manual mount example

On the HN we have a directory /home which we wish to make available (shared) to container 777. The correct command to issue on the HN is:

mount --bind /home $VZDIR/root/777/home

The container must be started (or at least mounted) and the destination directory must exist. The container will see this directory mounted like this:

# df
Filesystem           1K-blocks      Used Available Use% Mounted on
simfs                 10485760    298728  10187032   3% /
ext3                 117662052 104510764   7174408  94% /home

When the container stops, vzctl unmounts that bind mount, so you have to mount it again the next time you start the container. Luckily, there is a way to automate this.

Make the mount persistent

Put a mount script in the OpenVZ configuration directory (/etc/vz/conf/) with the name _CTID_.mount (where _CTID_ is the container ID, like 777). This script is executed every time you run vzctl mount or vzctl start for that particular container. If you need to do the same for all containers, use the global mount script named vps.mount. Within any mount script you can use the following environment variables:

  • ${VEID} -- container ID (like 777).
  • ${VE_CONFFILE} -- container configuration file (like /etc/vz/conf/777.conf)

Now, in order to get the value of VE_ROOT you need to source both the global OpenVZ configuration file and then the container configuration file, in that particular order. This is the same way vzctl determines VE_ROOT.
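Why the order matters can be demonstrated with a self-contained simulation (the file names and values below are made up for illustration): the per-container config, sourced second, overrides the global default.

```shell
# Simulate the global and per-container configs in a temp dir
tmp=$(mktemp -d)
echo 'VE_ROOT=/var/lib/vz/root/777' > "$tmp/vz.conf"   # global default
echo 'VE_ROOT=/srv/vz/root/777'     > "$tmp/777.conf"  # per-CT override

source "$tmp/vz.conf"    # global config first...
source "$tmp/777.conf"   # ...then the container config, which wins
echo "$VE_ROOT"          # /srv/vz/root/777
```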

Mount script example

Here is an example of such a mount script (it can be either /etc/vz/conf/vps.mount or /etc/vz/conf/_CTID_.mount):

#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
mount -n --bind /mnt/disk ${VE_ROOT}/mnt/disk

After creating the script, make it executable by issuing "chmod +x CTID.mount" at the command line; otherwise the container will fail to start.

Unmount script example

For unmounting a filesystem, /etc/vz/conf/vps.umount or /etc/vz/conf/_CTID_.umount script can be used in the same way:

#!/bin/bash
source /etc/vz/vz.conf
source ${VE_CONFFILE}
umount ${VE_ROOT}/mnt/disk

Note: the _CTID_.umount script is not strictly required, since vzctl tries to unmount everything on CT stop, but it is good practice to have one anyway.

umount scripts can cause errors on container start and may not be required if the -n option was used on mount (read the forum post). When mounting without the -n option, an umount script becomes required, but it will display errors, because libvzctl has already initiated a recursive umount. Moreover, if -n was not specified at mount time and no umount script is run, the /etc/mtab file on the HN can become stale, causing trouble for commands like df.

Read-only bind mounts

Since Linux kernel 2.6.26, bind mounts can be made read-only. The trick is to first mount as usual, and then remount it read-only:

mount -n --bind /home $VZDIR/root/777/home
mount -n --bind -oremount,ro $VZDIR/root/777/home

With some kernels you also need to specify the source directory: mount -n --bind -oremount,ro /home $VZDIR/root/777/home. Sometimes it is useful to have a folder mounted read-only in a VPS while still being able to put files in that directory. To do that, create another directory and symlink the read-only files into it:

vzctl exec2 777 "mkdir /addfileshere && ln -s /home/* /addfileshere/"

Now the /addfileshere folder is fully writable, and it even appears possible to delete files (but only the symlinks are deleted).
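The symlink behavior can be checked locally with a throwaway directory (the paths below are made up for the demo):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/home" "$tmp/addfileshere"
touch "$tmp/home/data.txt"
ln -s "$tmp/home/"* "$tmp/addfileshere/"   # symlink every file across

rm "$tmp/addfileshere/data.txt"            # removes only the symlink
ls "$tmp/home"                             # data.txt is still there
```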

VZdump:

Backup all containers:

vzdump --compress --dumpdir /home/backup --stop --all

Backup a single container (replace CTID with its ID, e.g. 102):

vzdump --compress --dumpdir /home/backup --stop CTID

Backup a single container to the default dump directory:

vzdump --compress --stop CTID

Use SCP to copy dump file:

scp /vz/dump/vzdump-CTID.tgz user@remotehost:/home

Restore: if you want to run both containers (the original and the clone) at the same time, you must change the clone's IP address and hostname before you start it.

vzrestore /home/vzdump-100.tgz 200

vzctl set CTID --hostname test2.example.com --save 
vzctl set CTID --ipdel 192.168.0.102 --save 
vzctl set CTID --ipadd 192.168.0.250 --save 
vzctl start CTID

Creating Container:

cid=1161
cd /vz/template/cache/
wget http://download.openvz.org/template/precreated/centos-5-x86_64.tar.gz
vzctl create ${cid} --ostemplate centos-5-x86_64 --config vps.basic
vzctl set ${cid} --hostname [HOSTNAMEHERE] --save
vzctl set ${cid} --ipadd [IP] --save
vzctl set ${cid} --nameserver [IP] --save
vzctl set ${cid} --ram 512M --swap 1G --save
vzctl set ${cid} --diskspace 6G:7G --save
vzctl start ${cid} 
vzctl exec ${cid} passwd 
vzctl enter ${cid}

If disk usage should be unlimited on the particular partition where containers are created, set in CTID.conf:

DISK_QUOTA=no
DISKSPACE="unlimited"

Starting/Suspending all containers at once

for ctid in `vzlist -Ho ctid`; do vzctl suspend $ctid; done
for ctid in `vzlist -SHo ctid`; do vzctl start $ctid; done

OpenVZ and DNSMASQ

Source:

http://www.blackmanticore.com/750200c59a9e88b84ab2bdd68e391954
http://forum.parallels.com/pda/index.php/t-284332.html

When running dnsmasq inside a VPS on an OpenVZ server, you may get an error while trying to start up dnsmasq (this is in particular the case for Debian):

Starting DNS forwarder and DHCP server: dnsmasq
dnsmasq: setting capabilities failed: Operation not permitted

This is because dnsmasq does not run as root (which is a good thing). What happens is that dnsmasq starts as root, then attempts to grant privileged capabilities to the dnsmasq user before switching from root to that user. When setting these capabilities fails, you get the above error. This usually fails because either the kernel is missing the required features or, in the case of OpenVZ, the capabilities are not passed on to the VPS. The latter can be resolved easily by adding them to the VPS config.

Solution: To resolve, simply add the necessary configuration parameters to the VPS config by running these:

vzctl set CTID --capability setuid:on --save
vzctl set CTID --capability net_admin:on --save
vzctl set CTID --capability net_raw:on --save

Replace CTID with the ID of the VPS you're editing. Note that you will have to restart the VPS for the changes to take effect. When done right, dnsmasq will start properly.

Notice

Another, worse solution is to run dnsmasq as root. This avoids the need to set the capabilities, but it is a potential security risk. Since there is a much better solution (setting the capabilities on the VPS, as above), don't do this.

In /etc/dnsmasq.conf

user=root