IPv6 Gateway outside network (OVH)

Lenne asked:

We have a dedicated server at OVH, assigned 2001:41d0:a:72xx::/64.

I have set the machines up on a segment bridged to the WAN, as per
IPv6 public routing of virtual machines from host

The gateway is 2001:41d0:a:72ff:ff:ff:ff:ff, which lies outside the network.

We’re running a bunch of virtual Debian servers.

Some of our (older) servers are happy to route IPv6 to the gateway, but the new ones I’m trying to set up say “Destination unreachable: Address unreachable” when pinging the gateway.

The firewall is set up identically (rules for the /64, nothing at host level), and the /etc/network/interfaces files are identical; the IPv6 addresses are set statically (with different addresses, of course).
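
For reference, a static stanza of the kind described usually looks roughly like the following in /etc/network/interfaces; the interface name and host address here are illustrative, and the xx simply mirrors the question’s redacted prefix. Because the gateway lies outside the /64, it needs an explicit on-link route before the default route:

# illustrative only; not the asker's actual configuration
iface eth1 inet6 static
    address 2001:41d0:a:72xx::10
    netmask 64
    # the gateway sits outside the /64, so add an on-link host route first
    post-up /sbin/ip -6 route add 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1
    post-up /sbin/ip -6 route add default via 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1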

On both the working and the non-working machines, netstat -rn6 | grep eth1 shows:

2001:41d0:a:72xx::/64          ::                         U    256 2    40 eth1
2001:41d0:a:7200::/56          ::                         UAe  256 2    71 eth1
2001:41d0:a1:72xx::/64         ::                         UAe  256 0     0 eth1
2000::/3                       2001:41d0:a:72ff:ff:ff:ff:ff UG   1024 2 63479 eth1
fe80::/64                      ::                         U    256 0     0 eth1
::/0                           fe80::205:73ff:fea0:1      UGDAe 1024 1     2 eth1
::/0                           fe80::20c:29ff:fe22:60f8   UGDAe 1024 0     0 eth1
ff00::/8                       ::                         U    256 2 108951 eth1

On the non-working machines, pinging the gateway or the outside world returns “Destination unreachable.”

The machines can all reach each other on the local LAN.

I don’t know if it is relevant, but on a working machine:

ping -c3 ff02::2%eth1
64 bytes from fe80::20c:29ff:fedb:a137%eth1: icmp_seq=1 ttl=64 time=0.240 ms
64 bytes from fe80::20c:29ff:fe22:60f8%eth1: icmp_seq=1 ttl=64 time=0.250 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffd%eth1: icmp_seq=1 ttl=64 time=3.57 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffe%eth1: icmp_seq=1 ttl=64 time=5.97 ms (DUP!)

On the non-working machine:

ping -c3 ff02::2%ens34
PING ff02::2%ens34(ff02::2%ens34) 56 data bytes
64 bytes from fe80::20c:29ff:fedb:a137%ens34: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from fe80::20c:29ff:fe22:60f8%ens34: icmp_seq=1 ttl=64 time=0.138 ms (DUP!)

The :fffd and :fffe addresses are missing.
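
One quick way to see whether those router addresses ever answer neighbour discovery on the broken machine is to check the neighbour cache and probe the gateway directly; the interface name below is the one from the question, and ndisc6 requires the ndisc6 package:

# does the gateway ever become REACHABLE, or does it stay FAILED/INCOMPLETE?
ip -6 neigh show dev ens34
# explicitly solicit the gateway's link-layer address
ndisc6 2001:41d0:a:72ff:ff:ff:ff:ff ens34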

All the IPv6 addresses have been assigned in the OVH control panel.

TL;DR: Something must be different between the old and new servers, but I can’t find it.

UPDATE: A clone of a working machine does not work.

On the outside of the pfSense, which is set up as a bridge, the machine sends this:


12:33:23.087778 IP6 test1.example.org > fe80::2ff:ffff:feff:fffe: ICMP6, neighbor advertisement, tgt is test1.example.org, length 32
12:33:24.106302 IP6 test1.example.org > par10s28-in-x0e.1e100.net: ICMP6, echo request, seq 451, length 64

But nothing ever comes back. Pings from outside don’t go through either.

As the machine is an exact clone of a working machine, except for the IP addresses, it must be an upstream problem at OVH.
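
One way to confirm that from the same capture point is to check whether OVH’s router ever sends a neighbour solicitation for the new machine’s address; if it never asks, nothing on the inside can answer. A hedged sketch, with the capture interface name assumed:

# watch all neighbour discovery traffic on the WAN-facing interface (name assumed)
tcpdump -n -i eth0 icmp6
# look for "neighbor solicitation, who has <the new machine's address>" coming from the router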

UPDATE 2: Now OVH claims that, to get data routed to an IPv6 address, the MAC needs to be associated with an IPv4 address. OMG. The working IPv6 addresses are not.

My answer:


OVH runs switch port security on their switches, so that only whitelisted MAC addresses can use any given port. This doesn’t apply to the vRack; switch port security is disabled there. But OVH won’t let you route IPv6 subnets to the vRack yet, nor can you fail over an IPv6 subnet to another server. This is a critical oversight; until both of these capabilities exist, OVH’s IPv6 support has to be considered limited.

So this is how I’ve set up an OVH server running a few dozen virtual machines:

On the host server, br3 is a bridge containing eno3 and virtual network interfaces on which I route IPv6. The host is configured as:

# cat /etc/sysconfig/network-scripts/ifcfg-br3
DEVICE="br3"
TYPE="Bridge"
STP="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_FAILURE_FATAL="no"
NAME="br3"
ONBOOT="yes"
ZONE="public"
BOOTPROTO="static"
IPADDR="203.0.113.24"
PREFIX="24"
GATEWAY="203.0.113.1"
IPV6_AUTOCONF="no"
IPV6ADDR="2001:db8:1f3:c187::/64"

I have static routes configured as such:

# cat /etc/sysconfig/network-scripts/route6-br3 
2001:db8:1f3:c187::/64 dev br3
2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
default via 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
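
Since the asker’s machines run Debian, a rough ifupdown equivalent of the bridge and routes above would look something like the following; the syntax is standard, but this stanza is a sketch rather than part of my actual setup:

# /etc/network/interfaces (rough Debian equivalent, illustrative)
auto br3
iface br3 inet static
    bridge_ports eno3
    address 203.0.113.24/24
    gateway 203.0.113.1

iface br3 inet6 static
    address 2001:db8:1f3:c187::/64
    # the gateway sits outside the /64, so give it an on-link route first
    post-up ip -6 route add 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
    post-up ip -6 route add default via 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3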

I then run ndppd, which answers NDP neighbor solicitation queries for any address in my /64. It’s configured as such:

# cat /etc/ndppd.conf 
route-ttl 30000
proxy br3 {
   router yes
   timeout 500   
   ttl 30000
   rule 2001:db8:1f3:c187::/64 {
      static
   }
}
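
ndppd only answers the neighbor solicitations; the host still has to forward IPv6 between br3 and the libvirt networks, so make sure forwarding is enabled. A minimal sysctl sketch (the file name is my choice, not part of the original setup):

# cat /etc/sysctl.d/90-ipv6-forwarding.conf
net.ipv6.conf.all.forwarding = 1

Load it with sysctl --system (or a reboot).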

This causes the MAC address of the host to be used for all IPv6 addresses in the subnet, which I then route to virtual interfaces in libvirt, split into /80 networks. One example is configured as such:

# cat /etc/libvirt/qemu/networks/v6bridge_1.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit v6bridge_1
or other application using the libvirt API.
-->

<network>
  <name>v6bridge_1</name>
  <uuid>7007a2b2-08b8-4cd5-a4aa-49654ae0829b</uuid>
  <forward mode='route'/>
  <bridge name='v6bridge_1' stp='on' delay='0'/>
  <mac address='52:54:00:fc:d4:da'/>
  <ip family='ipv6' address='2001:db8:1f3:c187:1::' prefix='80'>
  </ip>
</network>
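
Inside a VM attached to v6bridge_1, the network’s host-side address then serves as the guest’s gateway. A hypothetical guest configuration (Debian-style; the interface name and the ::10 address are invented for illustration):

# guest /etc/network/interfaces (hypothetical example)
iface ens3 inet6 static
    address 2001:db8:1f3:c187:1::10/80
    gateway 2001:db8:1f3:c187:1::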

All VMs in this particular network are assigned manual IPv6 addresses, but you could set up DHCPv6 if you wanted. That would look like:

  <ip family='ipv6' address='2001:db8:1f3:c187:1::' prefix='80'>
    <dhcp>
      <range start="2001:db8:1f3:c187:1::100" end="2001:db8:1f3:c187:1::1ff"/>
    </dhcp>
  </ip>

I then route IPv4 failover addresses to the vRack, which is bridged to a single bridge br4 on eno4, from which all my VMs get a second virtual NIC. Thus they have IPv6 on one interface and IPv4 on the other. This is optional; you could just keep IPv4 failover addresses on your main interface (if you don’t have a vRack, for instance).
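
For completeness, a minimal sketch of what that second bridge could look like on the host, in the same network-scripts style; the device names follow the text above, but the details are illustrative rather than taken from my configuration:

# cat /etc/sysconfig/network-scripts/ifcfg-eno4    (illustrative)
DEVICE="eno4"
ONBOOT="yes"
BRIDGE="br4"

# cat /etc/sysconfig/network-scripts/ifcfg-br4     (illustrative)
DEVICE="br4"
TYPE="Bridge"
STP="yes"
ONBOOT="yes"
BOOTPROTO="none"
# the host itself doesn't necessarily need an address here;
# the VMs carry the failover IPs on their second NIC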


View the full question and any other answers on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.