LXC Container Networking: NAT Bridge

Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (the LXC host). It does not provide virtual machine capabilities; rather, it provides a virtual environment that has its own CPU, memory, block I/O, and network space, along with a resource control mechanism.

This is provided by the namespaces and cgroups features of the Linux kernel on the LXC host. It is similar to a chroot, but offers much stronger isolation.
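These isolation primitives are visible from any Linux shell via the standard /proc interfaces, no LXC required. As a minimal illustration, you can list the namespaces the current shell belongs to and the cgroup controllers the kernel exposes:

```shell
# Each process's namespaces appear as symlinks under /proc/<pid>/ns;
# processes inside a container point at different namespace inodes
# than the host's init process does.
ls -l /proc/$$/ns

# cgroup subsystems (controllers) available in the running kernel
cat /proc/cgroups
```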

Set Up the LXC Runtime on CentOS 7.x:

1. The EPEL repository is required to download the LXC binaries.

# yum install epel-release

2. Install LXC on CentOS through the OS package manager.

# yum -y install lxc lxc-templates libcap-devel libcgroup busybox wget bridge-utils lxc-extra

 

Note:

lxc                      –  container runtime
lxc-templates            –  tar archive images used to create containers
libcap-devel, libcgroup  –  supporting libraries
busybox                  –  tiny utilities for small and embedded systems
wget                     –  download utility
bridge-utils             –  Linux network bridge utilities
lxc-extra                –  additional LXC utilities

3. Verify the LXC installation and bootstrap configuration.

# lxc-checkconfig

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.10.0-862.3.2.el7.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /bin/lxc-checkconfig

All the steps above establish the LXC runtime, but before launching a container it is recommended to build the network topology for it. That ensures the container becomes part of the defined network on every launch.

The network topologies widely used in small or medium environments are:

  1. Bridged network – makes the container a member of your physical network infrastructure, meaning traffic flows inward and outward without packet alteration.
  2. NAT bridge – a standalone bridge with a private network that is not bridged to the host or a physical network. With the help of iptables NAT, outgoing packets are masqueraded so the container can reach the internet.

From here on we will discuss the NAT bridge and the various ways to achieve internet access for your container:

  1. libvirtd
  2. Manual Bridge Creation
  3. lxc-net script

Approach 1: libvirtd

After your LXC runtime is geared up, check whether your environment already has a bridge ready for use.

# brctl show
bridge name     bridge id               STP enabled     interfaces

In many cases you will find no bridge adapter to work with.

So the easy way is to install libvirt, which brings you the network bridge virbr0 after every successful installation.

virbr0 is a NAT bridge by default, so you need not bother with traffic forwarding or masquerading rules.

# yum install libvirt

# systemctl start lxc

# systemctl start libvirtd

# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400fbb2a4       yes             virbr0-nic

Ensure the virbr0 bridge backs your LXC containers by modifying the configuration file below.

# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
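These settings in /etc/lxc/default.conf only seed the configuration of newly created containers; an existing container keeps its own copy. A sketch of repointing a single container at a different bridge, assuming the LXC 1.x layout used on CentOS 7 (the container name here is just an example):

```shell
# Per-container network settings live in the container's own config file
grep '^lxc.network' /var/lib/lxc/container-first/config

# Change the bridge for just this container (takes effect on next start)
sed -i 's/^lxc.network.link.*/lxc.network.link = virbr0/' \
    /var/lib/lxc/container-first/config
```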

Now we are ready to pilot our first LXC container.

# lxc-create -t centos -n container-first

# lxc-start -n container-first -d    (run in daemon mode)
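To confirm the container actually came up, and which address it received once its DHCP lease is granted, the listing and info helpers shipped with LXC 1.x can be used:

```shell
# One line per container: state, IPv4/IPv6 address, autostart flag
lxc-ls --fancy

# Detailed state of a single container
lxc-info -n container-first
```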

After the container spawns, attach to its console to gather container-related information.

# lxc-attach -n container-first

Run the habitual ip command to learn the launched container's IP address.

[root@container-first ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:5b:0b:ad:b0:8c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.122.94/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 2029sec preferred_lft 2029sec
    inet6 fe80::fc5b:bff:fead:b08c/64 scope link
       valid_lft forever preferred_lft forever

Check connectivity from the container to the host and to the internet.

[root@container-first ~]# ping 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.085 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.046 ms
^C
--- 192.168.122.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.046/0.065/0.085/0.021 ms

[root@container-first ~]# ping google.com
PING google.com (172.217.1.142) 56(84) bytes of data.
64 bytes from atl14s07-in-f142.1e100.net (172.217.1.142): icmp_seq=1 ttl=52 time=17.1 ms
64 bytes from atl14s07-in-f142.1e100.net (172.217.1.142): icmp_seq=2 ttl=52 time=17.1 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 17.170/17.173/17.176/0.003 ms

[root@container-first ~]# exit

Back on the host shell, to understand why the container was assigned an IP address, let's look at the effect of libvirt.

Since LXC is configured to utilize virbr0, every spawned container's interface is attached to that bridge and gets an IP address from the subnet held by virbr0.

# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.525400fbb2a4       yes             vethL91648
                                                        virbr0-nic

Ideally, the virbr0 IP 192.168.122.1 will be used as the gateway for all containers associated with it.

# ifconfig virbr0
virbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:fb:b2:a4  txqueuelen 1000  (Ethernet)
        RX packets 4912  bytes 254284 (248.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8235  bytes 36363532 (34.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
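The DHCP lease handed to the container comes from a dnsmasq instance that libvirt runs for its "default" NAT network. If you want to inspect that network's subnet, DHCP pool, or forwarding mode, libvirt's own tooling exposes them (this assumes the libvirt client tools are installed):

```shell
# Show libvirt-managed networks and their state
virsh net-list --all

# Dump the definition of the default NAT network: bridge name,
# forward mode, IP range and DHCP pool
virsh net-dumpxml default
```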



Approach 2: Create a custom bridge

If you are not a fan of libvirtd, or consider it an added burden, the following approach helps: you can design the private network subnet for your containers yourself, rather than taking the default one virbr0 comes with.

Create a so-called virtual adapter 'containerbr0' (call it anything you want) and configure it with the subnet IP address you want.

# cat /etc/sysconfig/network-scripts/ifcfg-containerbr0
DEVICE=containerbr0
TYPE=Bridge
IPADDR=192.168.30.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

Add a bridge adapter with the same name, 'containerbr0', and bring it up.

# brctl addbr containerbr0

# ifup containerbr0

[root@lxc-api-poc ~]# ifconfig containerbr0
containerbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.30.1 netmask 255.255.255.0 broadcast 192.168.30.255
inet6 fe80::6004:5dff:fe36:3ec4 prefixlen 64 scopeid 0x20<link>
ether fe:b5:96:85:45:cd txqueuelen 1000 (Ethernet)
RX packets 74 bytes 7050 (6.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 66 bytes 83442 (81.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

To forward traffic from the container to the internet, a NAT rule is required that tells the host to send any traffic from subnet 192.168.30.0/24 that is not destined for that subnet out through host interface eth0 with masqueraded packets.

# iptables -t nat -A POSTROUTING -s 192.168.30.0/24 ! -d 192.168.30.0/24 -o eth0 -j MASQUERADE

Note: This is a widely open rule; if you worry about security, tighten it further as per your requirements.

# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target prot opt source destination

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.30.0/24 !192.168.30.0/24
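One thing the masquerade rule depends on is kernel IP forwarding: if net.ipv4.ip_forward is 0, the host drops the container's routed packets regardless of the NAT rule. A sketch of enabling it and persisting the setup follows (the `service iptables save` step assumes the iptables-services package is installed):

```shell
# Enable routing between the bridge and the outbound interface
sysctl -w net.ipv4.ip_forward=1

# Persist the forwarding setting across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-lxc-nat.conf

# Persist the NAT rule (requires the iptables-services package)
service iptables save
```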

Next, tweak the global LXC configuration file as follows:

# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = containerbr0
lxc.network.flags = up

Restart lxc systemd service.

# systemctl restart lxc

Launch a container and perform the following mandatory configuration:

# lxc-create -t centos -n container-two

# lxc-start -n container-two -d (run in daemon mode)

# brctl show containerbr0
bridge name bridge id STP enabled interfaces
containerbr0 8000.feb5968545cd no vethBKENC5

# lxc-attach -n container-two

Once you are inside the container, configure an IP address in the same subnet configured for 'containerbr0'.

[root@container-two ~]# ip address add 192.168.30.10/24 dev eth0

[root@container-two ~]# ping 192.168.30.1
PING 192.168.30.1 (192.168.30.1) 56(84) bytes of data.
64 bytes from 192.168.30.1: icmp_seq=1 ttl=64 time=0.108 ms
64 bytes from 192.168.30.1: icmp_seq=2 ttl=64 time=0.046 ms

In addition, configure a default gateway:

# ip route add default via 192.168.30.1

# ip route
default via 192.168.30.1 dev eth0
192.168.30.0/24 dev eth0 proto kernel scope link src 192.168.30.10
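If name lookups fail inside the container even though pinging 192.168.30.1 works, the container simply has no resolver configured; pointing it at any DNS server reachable through the NAT fixes that (8.8.8.8 below is just an example):

```shell
# Inside the container: configure a resolver reachable via the NAT bridge
echo 'nameserver 8.8.8.8' > /etc/resolv.conf
```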

Test your internet connectivity.

[root@container-two ~]# ping google.com
PING google.com (172.217.1.142) 56(84) bytes of data.
64 bytes from atl14s07-in-f142.1e100.net (172.217.1.142): icmp_seq=1 ttl=52 time=16.8 ms
64 bytes from atl14s07-in-f142.1e100.net (172.217.1.142): icmp_seq=2 ttl=52 time=17.0 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 16.871/16.967/17.064/0.162 ms
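Note that the address and route added with `ip` vanish when the container restarts. To make them stick, write them into the container's network-scripts file, since the CentOS template uses the usual initscripts layout inside the container (values below match the example above):

```shell
# Inside the container: persist IP, netmask and gateway for eth0
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.30.10
NETMASK=255.255.255.0
GATEWAY=192.168.30.1
EOF
```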

The lxc-net script approach will be covered in a separate article.
