Serveurs Privés Virtuels (VPS) - Network unreachable in LXD container configured to use OVH failover IP (with netplan)
Question

Network unreachable in LXD container configured to use OVH failover IP (with netplan)

From
RonanF
Created on 2021-01-05 15:56:55 (edited on 2024-09-04 13:32:00) in Serveurs Privés Virtuels (VPS)

Hello all,

I have an OVH VPS running Ubuntu 20.04. I tried to configure my LXD containers to use a public IPv4 failover address (a /32). Unfortunately, I only managed to make it work in LXD containers running Ubuntu 16.04 (using ifupdown for network configuration) but NOT in LXD containers running Ubuntu 20.04 (using netplan).

A bridge device br0 is configured on my VPS with public IP HOST_IP. I have two additional IPs provided by OVH: OVH_FAILOVER_IP1 and OVH_FAILOVER_IP2.
I have configured two LXD profiles, and both containers c1 and c2 use both of them:

- “default”:

config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/instances/c1
- /1.0/instances/c2

- “extbridge”:

config: {}
description: Lets containers use public network interface
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: extbridge
used_by:
- /1.0/instances/c1
- /1.0/instances/c2
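For completeness, this is roughly how the profiles were created and attached (standard LXD CLI; the image alias and the idea of piping a YAML file into "lxc profile edit" are illustrative, not my exact history):

```shell
# create the second profile and load its device configuration
lxc profile create extbridge
lxc profile edit extbridge < extbridge.yaml

# launch a container with both profiles; for devices with the same name
# (eth0 here), the profile listed last overrides the earlier one, so the
# container's eth0 ends up bridged to br0 rather than lxdbr0
lxc launch ubuntu:20.04 c2 --profile default --profile extbridge
```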

This is the output of “lxc network list”:

+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 3       |
+--------+----------+---------+-------------+---------+
| ens3   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxcbr0 | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 1       |
+--------+----------+---------+-------------+---------+

In container c1, I have the following configuration (inspired by this blog: https://thomas-leister.de/en/lxd-use-public-interface/):

- in /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
# iface eth0 inet dhcp
iface eth0 inet static
    address OVH_FAILOVER_IP1/32
    gateway GATEWAY_IP
    dns-nameservers DNS_IP

source /etc/network/interfaces.d/*.cfg

# NOTE: directory /etc/network/interfaces.d/ is empty

- in /etc/resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver DNS_IP

In container c2, I have the following configuration:

- in /etc/netplan/10-lxc.yaml:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses:
        - OVH_FAILOVER_IP2/32
      gateway4: GATEWAY_IP
      nameservers:
        addresses:
          - DNS_IP

- in /etc/resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
# 127.0.0.53 is the systemd-resolved stub resolver.
# run "systemd-resolve --status" to see details about the actual nameservers.
nameserver DNS_IP
nameserver 127.0.0.53
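For what it's worth, one variant I have seen suggested for /32 addresses under systemd-networkd is to drop "gateway4" and declare the default route explicitly as on-link: with a /32 the gateway is outside the address's subnet, and networkd will otherwise refuse to install the route. A sketch of that variant (GATEWAY_IP and DNS_IP as above; I have not yet confirmed whether it fixes c2):

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
      addresses:
        - OVH_FAILOVER_IP2/32
      routes:
        # the gateway lies outside the /32, so it must be marked on-link
        - to: 0.0.0.0/0
          via: GATEWAY_IP
          on-link: true
      nameservers:
        addresses:
          - DNS_IP
```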

The output of “lxc list” is:

+------+---------+-------------------------+------+-----------+-----------+
| NAME |  STATE  |          IPV4           | IPV6 |   TYPE    | SNAPSHOTS |
+------+---------+-------------------------+------+-----------+-----------+
| c1   | RUNNING | OVH_FAILOVER_IP1 (eth0) |      | CONTAINER | 0         |
+------+---------+-------------------------+------+-----------+-----------+
| c2   | RUNNING | OVH_FAILOVER_IP2 (eth0) |      | CONTAINER | 0         |
+------+---------+-------------------------+------+-----------+-----------+

Container c1 works correctly: from within the container I can ping the internet (e.g. “ping -c 4 ...”), and I can reach the container with netcat from both the host and outside (e.g. “netcat -l 80” in the container and “netcat OVH_FAILOVER_IP1 80” from the host or elsewhere).

Container c2 does not work: from within the container I cannot ping the internet (“Network unreachable”), and the container is not reachable from either the host or outside.

I am a bit confused: the two configurations look identical to me (apart from one using ifupdown and the other netplan, but shouldn’t the result be the same?).

One thing I have noticed is that in container c1, the command “ip addr” gives something like:

eth0@if45: mtu 1500 qdisc noqueue state UP group default qlen 1000
inet OVH_FAILOVER_IP1/32 brd OVH_FAILOVER_IP1 scope global eth0
...

while in container c2 it gives:

eth0@if47: mtu 1500 qdisc noqueue state UP group default qlen 1000
inet OVH_FAILOVER_IP2/32 scope global eth0
...

I am not sure if broadcasting is relevant here…
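If it matters, one quick way to test would be to re-add the address in c2 with an explicit broadcast, the way c1 has it (a temporary change using plain iproute2, lost on restart):

```shell
# inside c2: replace the address with one carrying an explicit broadcast;
# for a /32, "brd +" sets the broadcast to the address itself, which is
# exactly what c1 shows ("brd OVH_FAILOVER_IP1")
ip addr del OVH_FAILOVER_IP2/32 dev eth0
ip addr add OVH_FAILOVER_IP2/32 brd + dev eth0
```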

Another difference is the output of “networkctl”:

in c1:

WARNING: systemd-networkd is not running, output will be incomplete.
IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback n/a unmanaged
44 eth0 ether n/a unmanaged

in c2:

IDX LINK TYPE OPERATIONAL SETUP
1 lo loopback carrier unmanaged
46 eth0 ether routable failed
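The “failed” SETUP state in c2 can be dug into with networkd’s own status output and logs (run inside c2):

```shell
# detailed per-link state as seen by systemd-networkd
networkctl status eth0

# recent networkd log lines, which usually name the route or address it rejected
journalctl -u systemd-networkd --no-pager -n 20
```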

Do you have any clue as to what may cause this difference in behaviour?

I have posted the same issue on the Linux Container Forum: https://discuss.linuxcontainers.org/t/network-unreachable-in-lxd-container-configured-to-use-ovh-failover-ip-with-netplan/9848/5

One contributor suggested that this may be due to OVH requiring all external IPs on a VPS to use the same MAC address (indeed, it does not seem possible to add a virtual MAC address to failover IPs associated with a VPS). Based on the OVH documentation, I think this may be true (even though I was unable to find it written explicitly). But what is surprising is that one of the containers (c1) actually works.

Thanks a lot