Running LXC containers with Debian

wlanboy

Content Contributor
My love for LXC started again after a post from jarland about LXC and Ubuntu.

Ubuntu (the latest releases) and LXC are nice partners and play well together.
The web-based GUI is done the right way.
But you do not actually need the GUI, and you do not need Ubuntu, to use LXC.

My tutorial today will show the basic, low-end, console-only Debian way to work with LXC.

So what is LXC? They say:

Current LXC uses the following kernel features to contain processes:

  •     Kernel namespaces (ipc, uts, mount, pid, network and user)
  •     Apparmor and SELinux profiles
  •     Seccomp policies
  •     Chroots (using pivot_root)
  •     Kernel capabilities
  •     Control groups (cgroups)
As such, LXC is often considered as something in the middle between
a chroot on steroids and a full fledged virtual machine.
The goal of LXC is to create an environment as close as possible to a
standard Linux installation but without the need for a separate kernel.
Licensing:

LXC is free software, most of the code is released under the terms of the
GNU LGPLv2.1+ license, some Android compatibility bits are released
under a standard 2-clause BSD license and some binaries and templates
are shipped under the GNU GPLv2 license.
And how can I work with it under Debian on a KVM VPS?

  1. Install linux headers

    apt-get install linux-headers-$(uname -r)

  2. Install the LXC packages
    Code:
    apt-get install lxc bridge-utils libvirt-bin
    You can leave out libvirt-bin - you only need it if you want to use the libvirt bridge (option 2 below).

Next thing is enabling the cgroups:


nano /etc/fstab
#Add this line at the end
cgroup /sys/fs/cgroup cgroup defaults 0 0
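
To activate the new fstab entry without a reboot, you can simply mount everything from fstab right away:

mount -a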

After that, the following command should show that everything is fine:


lxc-checkconfig

Output should look like this:


--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Next thing is to add and configure networking:

You can use LXC, libvirt, or /etc/network/interfaces to create the bridge that connects the containers to your local network.

  1. LXC

    nano /etc/default/lxc

    Add following content:


    # Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
    # containers. Set to "false" if you'll use virbr0 or another existing
    # bridge, or macvlan to your host's NIC.
    USE_LXC_BRIDGE="true"

    # If you change the LXC_BRIDGE to something other than lxcbr0, then
    # you will also need to update your /etc/lxc/lxc.conf as well as the
    # configuration (/var/lib/lxc/<container>/config) for any containers
    # already created using the default config to reflect the new bridge
    # name.
    # If you have the dnsmasq daemon installed, you'll also have to update
    # /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
    LXC_BRIDGE="lxcbr0"
    LXC_ADDR="10.0.3.1"
    LXC_NETMASK="255.255.255.0"
    LXC_NETWORK="10.0.3.0/24"
    LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
    LXC_DHCP_MAX="253"

    LXC_SHUTDOWN_TIMEOUT=120

    This is a copy of the default configuration of the Ubuntu package.
    I am used to the 10.0.3.0/24 network, but you can set up the network on your own.
  2. libvirt
    Second way to configure networking: [libvirt-bin]
    Define the network, start it, and enable autostart:

    #First line not needed for Debian 7!
    virsh -c lxc:/// net-define /etc/libvirt/qemu/networks/default.xml
    virsh -c lxc:/// net-start default
    virsh -c lxc:/// net-autostart default

    Output is:
    Code:
    ~# virsh -c lxc:/// net-define /etc/libvirt/qemu/networks/default.xml
    error: Failed to define network from /etc/libvirt/qemu/networks/default.xml
    error: operation failed: network 'default' already exists with uuid 7b950023-411a-5a72-b969-9568bc68908b
    
    ~# virsh -c lxc:/// net-start default
    Network default started
    
    ~# virsh -c lxc:/// net-autostart default
    Network default marked as autostarted
    We can look at the libvirt network config:


    cat /var/lib/libvirt/network/default.xml

    Code:
    <!--
    WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
    OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
      virsh net-edit default
    or other application using the libvirt API.
    -->
    
    <network>
      <name>default</name>
      <uuid>7b950023-411a-5a72-b969-9568bc68908b</uuid>
      <forward mode='nat'/>
      <bridge name='virbr0' stp='on' delay='0' />
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254' />
        </dhcp>
      </ip>
    </network>
  3. interfaces
    Third way to configure networking:
    Code:
    nano /etc/network/interfaces
    Code:
    #Bridge setup - add at the bottom of the file
    auto br0
    iface br0 inet static
      bridge_ports eth0
      bridge_fd 0
      address 10.0.3.2
      netmask 255.255.255.0
      gateway 10.0.3.1
      dns-nameservers 8.8.8.8
For me the third way is the easiest - a straightforward bridge for the LXC containers.
But it is also fine to use the LXC-generated bridge (which does the same for you).

If you are already running KVM you can use the bridged network from libvirt too.
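
Whichever way you choose, brctl (from the bridge-utils package installed earlier) will show the resulting bridge:

brctl show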

Now we need some iptables magic to give the LXC containers internet access:


iptables -t filter -A INPUT -i lxcbr0 -j ACCEPT
iptables -t filter -A OUTPUT -o lxcbr0 -j ACCEPT
iptables -t filter -A FORWARD -i lxcbr0 -j ACCEPT
iptables -A FORWARD -s 10.0.3.0/24 -o eth0 -j ACCEPT
iptables -A FORWARD -d 10.0.3.0/24 -o lxcbr0 -j ACCEPT

iptables -A POSTROUTING -t nat -j MASQUERADE

The same rules for the libvirt bridge (virbr0):
Code:
 iptables -t filter -A INPUT -i virbr0 -j ACCEPT
 iptables -t filter -A OUTPUT -o virbr0 -j ACCEPT
 iptables -t filter -A FORWARD -i virbr0 -j ACCEPT
 iptables -A FORWARD -s 192.168.122.0/24 -o eth0 -j ACCEPT
 iptables -A FORWARD -d 192.168.122.0/24 -o virbr0 -j ACCEPT

 iptables -A POSTROUTING -t nat -j MASQUERADE
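The MASQUERADE rule above rewrites every outgoing connection of the host. A stricter variant - optional, limited to the container network and the eth0 uplink used above - would be:

iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -o eth0 -j MASQUERADE

Keep in mind that iptables rules are not persistent across reboots - save them with iptables-save and restore them on boot.
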
The other way round is to route ports from the host to one container - e.g. for a VestaCP instance:

  • -i eth0: the host interface that should listen
  • --to-destination: IP and port of the LXC container as target

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 20 -j DNAT --to-destination 10.0.3.3:20
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 21 -j DNAT --to-destination 10.0.3.3:21
iptables -t nat -A PREROUTING -i eth0 -p udp -m udp --dport 53 -j DNAT --to-destination 10.0.3.3:53
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.3.3:80
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 25 -j DNAT --to-destination 10.0.3.3:25
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 143 -j DNAT --to-destination 10.0.3.3:143
iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 587 -j DNAT --to-destination 10.0.3.3:587
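
You can check that the rules landed in the NAT table:

iptables -t nat -L PREROUTING -n --line-numbers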


Now - finally - it is time to create our first container:

I will call it "vnc".


lxc-create -n vnc -t debian

Manpage for lxc-create: http://lxc.sourceforge.net/man/lxc-create.html

You will be asked quite a lot of things, but the important ones are the Debian version,
the package sources, and the root password.

Creating the first container might take quite a bit of time (it has to download all the files).

We should then take a look at the container configuration:


nano /var/lib/lxc/vnc/config

Add the following lines:


lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 10.0.3.3/24
lxc.network.ipv4.gateway = 10.0.3.1

So this time we use the host's interface "virbr0", give the container the IP "10.0.3.3", and use the gateway "10.0.3.1" (the host's IP) to get access to the internet.
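
Since the cgroups are mounted, the same config file can also cap the container's resources. The lxc.cgroup.* keys map directly to cgroup controller files; the values here are just examples:

lxc.cgroup.memory.limit_in_bytes = 256M
lxc.cgroup.cpuset.cpus = 0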

Next step: Enable autostart of the container:


ln -s /var/lib/lxc/vnc/config /etc/lxc/auto/vnc
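
A quick check that the symlink is in place:

ls -l /etc/lxc/auto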

And start/stop the container:


lxc-start -n vnc -d
lxc-stop -n vnc
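
You can check the state of a single container with:

lxc-info -n vnc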

"lxc-list" will list all containers:


RUNNING
vnc

FROZEN

STOPPED

You can enter the console of the container with:


lxc-console -n vnc

Remember the following shortcuts:


Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Well, the console should show up ... if the Debian template package were not broken.
The fix is already available for Ubuntu, but on Debian you might have to wait for it.
There are patches available, but the problem itself is simple ... the ttys are missing.

But that can be fixed easily:


chroot /var/lib/lxc/vnc/rootfs
# create the missing tty device nodes (character devices, major 4, minor 1-3)
mknod -m 666 /dev/tty1 c 4 1
mknod -m 666 /dev/tty2 c 4 2
mknod -m 666 /dev/tty3 c 4 3
exit

The next issue might be the resolv.conf:


nano /var/lib/lxc/vnc/rootfs/etc/resolv.conf

Just ensure that the DNS servers are correct.
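
For example, with the same resolver used in the bridge setup above:

nameserver 8.8.8.8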

So back to the console:


lxc-console -n vnc

And everything is working again:


#Output:
Debian GNU/Linux 7 vnc tty1

vnc login:

So log in as root and reinstall the SSH server:


apt-get update && apt-get install --reinstall openssh-server

The next time you restart the container you can log in to it via SSH:


ssh 10.0.3.3

You can even forward the ssh port to one of the LXC containers.
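
For example, with the same DNAT pattern as above (port 2222 on the host side is an arbitrary choice, the container keeps its default port 22):

iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 2222 -j DNAT --to-destination 10.0.3.3:22
ssh -p 2222 <ip of your host>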

That's it.

For me LXC is a good tool to separate services.

It is easy to try control panels like VestaCP because you do not have to reinstall your main KVM VPS.

Just start a container and install whatever you want - it cannot harm your main VPS.

You can even install different versions of a lib or server, each in its own instance.
 