Docker networking help (vlans)
from zo0@programming.dev to selfhosted@lemmy.world on 05 Apr 02:06
https://programming.dev/post/48319093

Hi folks, hope your weekend is going well.

So I have put myself into a situation. I have a home server with Docker installed, running fine so far. In my home network I have multiple networks for different purposes. The whole network stack looks like this: OPNsense — Switch — Ubuntu Server.

The server is connected to a switch port with pvid 100 and runs on vlan0.100. Now my goal is to move some Docker containers to other VLANs. To accomplish that, I set up vlan0.101 and vlan0.102 on my server as interfaces, each with its own IP and default gateway on that subnet (e.g. 192.168.101.10). Next I set up macvlans for my Docker containers. Then I set the switch port to also allow tagged traffic, but kept it on pvid 100. Finally, on my OPNsense I changed the host IP of my server from 192.168.100.10 to include all 3 IPs, so homeserver = 192.168.100.10, 192.168.101.10, 192.168.102.10.
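To make it concrete, here is a netplan sketch of what I described (the NIC name enp1s0 is a placeholder, and the exact file may differ from what's on the box):

```yaml
network:
  version: 2
  ethernets:
    enp1s0:                 # placeholder NIC name
      dhcp4: false
  vlans:
    vlan0.100:
      id: 100
      link: enp1s0
      addresses: [192.168.100.10/24]
      routes:
        - to: default
          via: 192.168.100.1
    vlan0.101:
      id: 101
      link: enp1s0
      addresses: [192.168.101.10/24]
      # I gave this subnet its own default gateway (192.168.101.1) too,
      # and likewise for vlan0.102 - possibly part of my problem?
    vlan0.102:
      id: 102
      link: enp1s0
      addresses: [192.168.102.10/24]
```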

This setup seems to work fine for internal network, however no services are reachable from the outside (internet) anymore.

My first question is: Am I thinking about this correctly? Or is this over-engineered BS at this point, and is there a better way to put Docker containers on different subnets?

Second question is: Any ideas what’s breaking the internet access?

Thanks for the help in advance :D

EDIT: I have not changed the VLAN of any container yet

#selfhosted


probable_possum@leminal.space on 05 Apr 03:49 next collapse

I don’t think I know the reason for the issue you’ve described. I don’t have enough information for that.

First thing would be: Is the routing and firewalling OK? Later: DNS. Even later: services reachable?

Does the OPNsense instance have multiple VLANs and zones configured too, with one server interface in each? Do packets between the VLANs take a path via the router?

I tried to give my server multiple interfaces on different VLANs once, but ran into problems with that approach. I then added one bridge interface per VLAN to the server and gave it just one IP on one VLAN. That way the server isn’t tempted to route things itself or deliver packets on a wrong interface; an entire class of possible errors was removed that way. Docker containers and VMs can still have IPs in their respective VLANs/nets.

It is worth noting that Docker firewalling and ufw don’t play well together, which could be the reason for unreachable services. Moving the Docker host into an LXC abstracts the issue away. Incus can run OCI containers itself and may be an alternative to Docker (but not to Docker Compose).

I can’t say anything about over-engineering. It is a hobby after all, and you decide what is important and how much complexity you need. :)

zo0@programming.dev on 05 Apr 04:44 next collapse

Answer to your first question: the Docker containers successfully resolve DNS and access the internet.

Yes, the OPNsense is the primary router and DHCP provider, so all the subnets and VLANs are defined and working with physical devices.

I was actually having the same issue with the routing on the server. How did you set up your bridge exactly? Do you mind sharing your netplan?

surewhynotlem@lemmy.world on 05 Apr 05:49 next collapse

Bridge? There’s your problem I think. Bridge doesn’t allow ingress to individual IPs. In bridge, you tell each container what port it listens on, then access it from the IP of the host.
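For example (the service name and image here are made up), in default bridge mode a compose file publishes a port and you browse to the host's IP, not the container's:

```yaml
# Hypothetical compose file using bridge-style networking: the container's
# port 80 is published on host port 8080, so the service is reached via
# http://<host-ip>:8080 rather than via a per-container IP.
services:
  whoami:
    image: traefik/whoami
    ports:
      - "8080:80"   # host:container
```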

User defined bridges act differently from the default one as well. May not be relevant to your issue, but docs.docker.com/engine/network/drivers/bridge/#di…

zo0@programming.dev on 05 Apr 06:16 collapse

I mean, at the moment I don’t have any bridges set up (other than Docker’s own bridge). I thought maybe I could solve my issue with bridging.

surewhynotlem@lemmy.world on 05 Apr 09:51 collapse

Oh, hmm. How are you telling which service to be on which IP then? Could you safely post your compose file?

zo0@programming.dev on 06 Apr 00:47 collapse

I will post it when I get my hands on it, but basically I made a macvlan that uses the server VLAN, and then in the compose file I set the network to that macvlan, which seems to be functional at least.

probable_possum@leminal.space on 05 Apr 08:41 collapse

Netplan config? Sure:

network:
  ethernets:
    enp35s0:
      dhcp4: false
    enp36s0:
      dhcp4: false
  vlans:
    enp35s0.100:
      id: 100
      link: enp35s0
      dhcp4: false
    enp35s0.101:
      id: 101
      link: enp35s0
      dhcp4: false
  bridges:
    br0:
      # untagged
      interfaces: [enp35s0]
      dhcp4: false
    br0.100:
      # vlan 100
      interfaces: [enp35s0.100]
      dhcp4: false
    br0.101:
      # vlan 101
      interfaces: [enp35s0.101]
      dhcp4: true
  version: 2

I’m not sure if the version property is still required. The only interface with an IP is br0.101. OPNsense provides DHCP (v4).

You can attach multiple ethernet-devices to a bridge (which I did not):

      br0.100:
        interfaces:
          - enp35s0.100
          - two
          - three

I’m not sure if you can attach the Docker bridge via netplan; it has to exist at boot time, I think. My Docker containers run inside a VM (KVM) with one interface, which sits in one of the VLANs. The VM’s interface is a bridge device (br0.100). The VM’s ethernet device is attached to the bridge; it receives its IP from the router and behaves like a real server.

zo0@programming.dev on 06 Apr 00:44 collapse

Thanks for sharing this, I’ll give it a try and see how it goes

irmadlad@lemmy.world on 05 Apr 09:14 collapse

It is worth noting that docker firewalling and ufw don’t play well together

This. It took me a little fiddling to get it right.

frongt@lemmy.zip on 05 Apr 05:35 next collapse

If LAN works but WAN fails, it’s probably a gateway or routing issue. Does your router know it’s the gateway for those subnets? Do the clients have the gateway configured? Are there routes for packets to find their way out, and back in from the gateway to the client?

zo0@programming.dev on 05 Apr 06:14 collapse

That is my gut feeling too, but as I mentioned in another comment, all physical devices work fine in their respective subnets. This is happening before I’ve even moved the containers to a new subnet, and before these changes everything was working fine.

Decronym@lemmy.decronym.xyz on 05 Apr 06:20 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
DNS: Domain Name Service/System
IP: Internet Protocol
LXC: Linux Containers

4 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

[Thread #213 for this comm, first seen 5th Apr 2026, 13:20] [FAQ] [Full list] [Contact] [Source code]

buedi@feddit.org on 06 Apr 00:06 collapse

I can’t see your full setup / config from here, but a) you are not over-engineering that. Using VLANs to segment networks is a very good practice. And although neither Docker nor Podman allows macvlan when running rootless, my gut feeling tells me that segmenting my network takes priority over running rootless, because I think attack vectors that traverse networks are much more common than breaking out of a container into the host. But this is just my gut feeling. b) I think I run here what you want to achieve, so I’ll try to explain what I did.

My setup is similar to yours: OPNsense (OpenWRT before that), a VLAN-capable switch, and an Ubuntu server with a single NIC that hosts all the Compose stacks.

  1. You already configured your VLANs in OPNsense, so I will just mention that I created mine via Interfaces -> Devices -> VLAN on the LAN interface of my OPNsense and then used the Assignments to finally make them available. On the OPNsense, each one gets a static IP from the respective network I defined for the VLAN.
  2. On the Docker host, in netplan I configured the single NIC I have as a bridge. I cannot remember if that was necessary or if I just planned ahead for adding a 2nd NIC later on, to avoid having to reconfigure the whole networking again. Of course that bridge sits in my LAN, and the netplan config looks like this:
network:
  ethernets:
    eno1:
      dhcp4: no
  version: 2
  bridges:
    br0:
      addresses:
      - 192.x.x.3/24
      nameservers:
        addresses:
        - 192.x.x.x
        search:
        - my.lan
        - local
      routes:
      - to: default
        via: 192.x.x.1
      interfaces:
        - eno1
  3. Now, so that the Docker containers can use the VLANs, I had to create Docker networks of type macvlan like this:
docker network create -d macvlan --subnet=192.x.10.0/24 --gateway=192.x.10.1 -o parent=br0.10 vlan10
docker network create -d macvlan --subnet=192.x.20.0/24 --gateway=192.x.20.1 -o parent=br0.20 vlan20
  4. Now, for a container to make use of those networks, you have to define them as external in the Compose stack like this:
services:
  my-service:
    image: blah
    ...
    networks:
      - vlan10

networks:
  vlan10:
    external: true
zo0@programming.dev on 06 Apr 01:03 collapse

Thanks, that’s a great write-up. One thing I didn’t understand, however: in your Docker macvlans you set the parent to br0.10 and br0.20; where are those parents defined?

Maybe I misunderstood the macvlan documentation, but what I did was define a VLAN (vlan0.100) in the server’s netplan and set the macvlan parent to that vlan0.100. Is that not how it’s supposed to be done?

buedi@feddit.org on 06 Apr 01:24 collapse

The .10 or .20 just advises Docker to create that specific subinterface automatically. In my example, ip link will show new interfaces called br0.10 and br0.20 after creating the macvlan networks for VLAN IDs 10 and 20. You do not need to adjust your netplan config when doing it like that. I would even assume that you are not allowed to also define VLAN IDs 10 and 20 in netplan in that particular case; I would expect that to cause issues. Also see docs.docker.com/engine/network/drivers/macvlan/ in the 802.1Q trunk bridge mode section.

There are probably multiple ways to do all of this, but this is how I did it, and it has worked for me for a few years without touching it again. All VLANs are separated from each other and no VLAN has access to the LAN side. Everything is forced to go through tagged VLANs via the switch to the firewall, where I then create rules to allow / deny traffic from / to all my networks and the internet.

For me, this setup is very simple to re-implement should my host go down. No special configuration in netplan is needed: I only need to create the Docker networks and start up my stacks again.