LXC/Network examples

From Gentoo Wiki
Warning
The information on this page is outdated and needs verification.

See also: LXC#Network_configuration

Example configuration

veth in router mode

In this example, enp5s0 is the host interface, with IP address 192.168.1.100. The container's configuration file assigns 192.168.10.101 to the container's interface and 192.168.10.100 as the gateway.

FILE ~/.config/lxc/guest.conf
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.name = eth0
lxc.net.0.ipv4.address = 192.168.10.101/24
lxc.net.0.ipv4.gateway = 192.168.10.100

# add the following to automate the steps described in the paragraphs below
lxc.hook.version = 1
lxc.net.0.script.up = /path/to/script/up.sh
FILE /path/to/script/up.sh
#!/bin/bash

# Assign the gateway address to the host side of the veth pair
ip addr add 192.168.10.100/24 dev "$LXC_NET_PEER"

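The hook above only assigns the address. A slightly extended sketch (hypothetical; it assumes enp5s0 is the host's uplink as in this example, and that the nat table from the stock ruleset already exists) could also add the masquerade rule, so no manual steps remain after the container starts:

```shell
#!/bin/bash
# Hypothetical extended up hook: assign the gateway address to the
# host side of the veth pair, then masquerade outbound traffic.
# Assumes enp5s0 is the uplink and the "nat" table is already loaded.
ip addr add 192.168.10.100/24 dev "$LXC_NET_PEER"
nft add rule nat postrouting oif enp5s0 masquerade
```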
A virtual interface will appear after starting the container:

root #ip link
6: vethCOB3OK@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:b7:4a:99:aa:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0

Setting the IP of the virtual interface (vethCOB3OK) to the address of the container's gateway (192.168.10.100) and adding the masquerade routes to the firewall will give internet access to the guest:

root #ip addr add 192.168.10.100/24 dev vethCOB3OK
root #ip addr
6: vethCOB3OK@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:b7:4a:99:aa:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.10.100/24 scope global vethCOB3OK
       valid_lft forever preferred_lft forever
    inet6 fe80::fcb7:4aff:fe99:aa3e/64 scope link 
       valid_lft forever preferred_lft forever
root #nft add rule nat postrouting oif enp5s0 masquerade

If the command above fails, try running this one first:

root #nft -f /usr/share/nftables/ipv4-nat.nft

Packet forwarding may also need to be enabled in the kernel and allowed in the firewall configuration. See the next sections for details.
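A minimal sketch of those steps (assuming the container network 192.168.10.0/24 from this example, and the filter/forward chain names used by the stock ruleset):

```shell
# Enable IPv4 forwarding for the current boot
echo 1 > /proc/sys/net/ipv4/ip_forward
# Allow new connections from the container network to be forwarded
# (assumes a "filter" table with a "forward" chain already exists)
nft add rule filter forward ip saddr 192.168.10.0/24 ct state new accept
```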

Host configuration for VLANs inside a bridge connected to the container's virtual Ethernet pair device

Let's assume that we have a host whose enp2s0 device is connected to the provider's LAN, which in turn reaches the Internet (WAN) through the ppp0 interface. Our private LAN sits on the enp3s6 interface. Since we don't have many spare network interfaces, and we also want some network isolation for the container, let's create another VLAN interface (enp3s6.1) on the host, on top of our private LAN interface enp3s6. Then we put it inside the bridge br0.1 as a port.

FILE /etc/conf.d/net
# VLAN (802.1q)
vlans_enp3s6="1"
# bridge ports defined empty to avoid DHCP being run for their configuration (bridge will have 1 IP)
config_enp3s6_1="null"

# Bridge (802.1d)
# To add port to bridge dynamically when the interface comes up
bridge_add_enp3s6_1="br0.1"
# Give the bridge an IP address - a static one
config_br0_1="192.168.10.1/24"
# One of the ports of bridge require extra configuration - VLAN enp3s6.1 on enp3s6 interface - we need to depend on it like so
rc_net_br0_1_need="net.enp3s6"

# Note that it is important to include 'bridge_forward_delay_br0=0' and 'bridge_hello_time_br0=1000' in order
# to bring the interface up quickly. Other values will cause network packets
# to be dropped for the first 30 seconds after the bridge has become active.
# This in turn could prevent DHCP from working.
bridge_forward_delay_br0_1=0
bridge_hello_time_br0_1=1000
bridge_stp_state_br0_1=0

Then create the bridge interface, restart the enp3s6 interface to bring up enp3s6.1, start the bridge, and add it to the default runlevel:

root #cd /etc/init.d/
root #ln -s net.lo net.br0.1
root #cd ~
root #rc-service net.enp3s6 restart
root #rc-service net.br0.1 start
root #rc-update add net.br0.1

You will have something like the following configuration:

root #ip addr
3: enp3s6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global enp3s6
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
4: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 10.55.1.101/24 brd 10.55.1.255 scope global enp2s0
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
5: enp3s6.1@enp3s6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.1 state UP group default 
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
6: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1400 qdisc pfifo_fast state UNKNOWN group default qlen 3
    link/ppp 
    inet 76.54.32.101 peer 76.54.20.10/32 scope global ppp0
       valid_lft forever preferred_lft forever
8: br0.1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.1/24 brd 192.168.10.255 scope global br0.1
       valid_lft forever preferred_lft forever
    inet6 xxxx::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever

Now start the container with a veth interface attached to our bridge br0.1. Another network interface will appear on the host's side, looking like this:

root #ip addr
...
10: vethB004H3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0.1 state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 xxxx::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever

Both our host's enp3s6.1 VLAN and the container's virtual Ethernet pair device vethB004H3 are ports of our bridge br0.1:

root #brctl show
bridge name     bridge id               STP enabled     interfaces
br0.1           8000.blablablabla       no              enp3s6.1
                                                        vethB004H3
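brctl comes from the legacy bridge-utils package; if it is not installed, the same port listing is available with iproute2:

```shell
# List the interfaces enslaved to br0.1 using iproute2 instead of brctl
ip link show master br0.1
```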

Host configuration with NAT networking (nftables)

Let's now give the container Internet access. We'll use nftables for that (for iptables, see the next section). Since we don't want the container to reach our private LAN or our provider's LAN, we'll give it access only through the ppp0 WAN device. Let's assume your host already has a configuration similar to Nftables/Examples#Simple_stateful_router_example. Then you'll have to add several rules to it in the appropriate places.

FILE /home/rt/scripts/nft.sh
#!/bin/bash
 
nft="/sbin/nft";
...
LAN_PRIVATE_LXC=192.168.10.1/24
export WAN=ppp0
...
#4LXC
${nft} add rule nat postrouting oifname ${WAN} ip saddr ${LAN_PRIVATE_LXC} masquerade;
...
#4LXC
${nft} add rule filter forward ip saddr ${LAN_PRIVATE_LXC} ct state new accept;
echo ACCEPT ${LAN_PRIVATE_LXC} ALL;
...
/etc/init.d/nftables save;

This will give you Internet access inside the container. You can later create more isolated containers, each inside its own bridge br0.X, or connect several containers' interfaces to a single bridge br0.Y.
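For example, a second isolated bridge could be configured the same way as br0.1 above (the VLAN ID 2, the bridge name br0.2, and the 192.168.20.0/24 subnet below are hypothetical):

```shell
# /etc/conf.d/net fragment: hypothetical second VLAN/bridge pair
vlans_enp3s6="1 2"
config_enp3s6_2="null"
bridge_add_enp3s6_2="br0.2"
config_br0_2="192.168.20.1/24"
rc_net_br0_2_need="net.enp3s6"
```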

Host configuration with NAT networking (iptables)

For simple network access from the container to the outside world via NAT using iptables, we can masquerade all connections from our container network through the device that is connected to the Internet.

Before this, reuse the already created device net.br0.1 (see the section above):

FILE /etc/lxc/lxc-usernet
lxc veth br0.1 2

and your guest container's configuration should contain something like this (for a full listing of this file and its configuration, please read the next section or the unprivileged container section):

FILE ~/.config/lxc/guest.conf
# Other configuration here
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0.1
lxc.net.0.name = eth0
lxc.net.0.ipv4.address = 192.168.10.101/24
lxc.net.0.ipv4.gateway = 192.168.10.1
lxc.net.0.hwaddr = b6:65:81:93:cb:a0
# Possibly uid/gid mapping here

First, enable IPv4 packet forwarding:

root #echo 1 > /proc/sys/net/ipv4/ip_forward

To enable it at every boot:

FILE /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...
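To apply the /etc/sysctl.conf setting without rebooting (both commands need root):

```shell
# Re-read /etc/sysctl.conf immediately
sysctl -p
# ...or set the key directly
sysctl -w net.ipv4.ip_forward=1
```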

Configure NAT to accept and masquerade all connections from the container to the outside world. The command below uses enp5s0 as the output device; substitute the correct name of your network/Wi-Fi card (you can find it in the ip link or ifconfig output):

root #iptables -P FORWARD ACCEPT
root #iptables -t nat -A POSTROUTING -s 192.168.10.1/24 -o enp5s0 -j MASQUERADE

To save this rule for future boot-ups:

root #rc-service iptables save

Now, start the container and check networking:

user $ping 8.8.8.8

Guest configuration for a virtual Ethernet pair device connected by bridge

The guest network configuration resides in the guest's /etc/lxc/<lxcname>/config file. To auto-generate it we will use distribution-specific template scripts, but we need a base network configuration for the generation. We will use /etc/lxc/guest.conf as that base config file. Documentation for both of these files is available via: man lxc.conf.

The configuration should include the following network-related lines:

FILE /etc/lxc/guest.conf
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0.1
lxc.network.name = eth0
#lxc.network.ipv4 = 192.168.10.101/24
#lxc.network.hwaddr = b6:65:81:93:cb:a0
Note
If you are not using DHCP inside the container to get an IP address, then just delete the 'lxc.network.hwaddr' line, and manually specify the IP you want to use next to lxc.network.ipv4.

If you are using DHCP inside the container to get an IP address, then run it once as shown. LXC will generate a random MAC address for the interface. To keep your DHCP server from getting confused, you will want to use that MAC address all the time. So find out what it is, and then uncomment the 'lxc.network.hwaddr' line and specify it there.
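One way to find the generated MAC, assuming the interface is named eth0 as configured above, is to read it from inside the running container:

```shell
# Inside the guest: print eth0's MAC address from sysfs
cat /sys/class/net/eth0/address
```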

Note
If you have compiled bridge netfilter into your kernel, the LXC guest will only be able to ping the host and not other computers on your LAN or the internet, since all network traffic from the bridge is filtered by the kernel for routing. (See [1])

The solution is to disable all bridge-nf-* filters in /proc/sys/net/bridge:

root #for f in /proc/sys/net/bridge/bridge-nf-*; do echo 0 > $f; done

You can permanently disable the bridge-nf-* filters by setting each to '0' in /etc/sysctl.conf:

FILE /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0

or by creating a file with those same settings in /etc/sysctl.d/:

FILE /etc/sysctl.d/99-bridge-nf-dont-pass.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0

Alternatively, you can avoid the above trouble with bridge-netfilter by configuring the relevant in-kernel bridge options, or by turning some of them off. For example, with the following bridge-related kernel config options there is nothing inside /proc/sys/net/bridge/ at all, and LXC works inside the bridge br0.1:

root #grep BRIDGE /usr/src/linux/.config
CONFIG_BRIDGE_NETFILTER=m
# CONFIG_NF_TABLES_BRIDGE is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y

Adjusting guest config of the container after using template script

If you end up with a non-working network inside the container (after using a template script), you can always adjust the guest configuration on the host via the /etc/lxc/<lxcname>/config file. For example:

FILE /etc/lxc/alpha/config
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0.1
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.10.101/24
lxc.network.ipv4.gateway = 192.168.10.1

You can also always change the network configuration inside the container by adjusting its configuration files (after logging into the container), for example:

FILE /etc/resolv.conf
nameserver 8.8.8.8