• 2 Posts
  • 19 Comments
Joined 2 years ago
Cake day: July 14th, 2024

  • The .10 or .20 just tells Docker to create that specific subinterface automatically. In my example, ip link will show new interfaces called br0.10 and br0.20 after the macvlan networks for VLAN IDs 10 and 20 have been created. You do not need to adjust your Netplan config when doing it like that. I would even assume that you must not additionally define VLAN IDs 10 and 20 in Netplan in that case; I would expect that to cause issues. Also see https://docs.docker.com/engine/network/drivers/macvlan/ in the 802.1Q trunk bridge mode section.
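
    If you want to double-check that, something like this should show the subinterfaces Docker created (a sketch; the names assume the br0 parent from my example):

    # after "docker network create ... -o parent=br0.10 vlan10" and the same for VLAN 20
    ip -d link show br0.10   # should report "vlan protocol 802.1Q id 10" and br0 as parent
    ip -d link show br0.20   # same for VLAN ID 20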

    There are probably multiple ways to do all of this, but this is how I did it, and it has worked for me for a few years without touching it again. All VLANs are separated from each other and no VLAN has access to the LAN side. Everything is forced to go through tagged VLANs via the switch to the firewall, where I create rules to allow / deny traffic from / to all my networks and the Internet.

    For me, this setup is very simple to re-implement should my host go down: no special Netplan configuration is needed, I only have to recreate the Docker networks and start my stacks again.


  • I can’t see your full setup / config from here, but a) you are not overengineering that. Using VLANs to segment networks is a very good practice. And although neither Docker nor Podman allows macvlan when running rootless, my gut feeling tells me that segmenting my network takes priority over running rootless, because I think attack vectors that traverse networks are much more common than breaking out of a container into the host. But this is just my gut feeling. b) I think I run here what you want to achieve, so I will try to explain what I did.

    My setup is similar to yours: OPNsense (OpenWRT before that), a VLAN-capable switch and an Ubuntu server with a single NIC that hosts all the Compose stacks.

    1. You already configured your VLANs in OPNsense, so I will just mention that I created mine via Interface -> Devices -> VLAN on the LAN interface of my OPNsense and then used the Assignments to finally make them available. On the OPNsense side, each one gets a static IP from the respective network I defined for the VLAN.
    2. On the Docker host, in Netplan I configured the single NIC I have as a bridge. I cannot remember whether that was strictly necessary or whether I just planned ahead for a possible second NIC, so I would not have to reconfigure the whole networking later. Of course that bridge sits in my LAN, and the Netplan config looks like this (a short apply / verify sketch follows after step 4):
    network:
      ethernets:
        eno1:
          dhcp4: no          # the physical NIC gets no address of its own
      version: 2
      bridges:
        br0:                 # the bridge carries the host's LAN address
          addresses:
          - 192.x.x.3/24
          nameservers:
            addresses:
            - 192.x.x.x
            search:
            - my.lan
            - local
          routes:
          - to: default
            via: 192.x.x.1
          interfaces:        # eno1 is enslaved to the bridge
            - eno1
    
    3. So that the Docker containers can use those VLANs, I had to create Docker networks of type macvlan like this:
    docker network create -d macvlan --subnet=192.x.10.0/24 --gateway=192.x.10.1 -o parent=br0.10 vlan10
    docker network create -d macvlan --subnet=192.x.20.0/24 --gateway=192.x.20.1 -o parent=br0.20 vlan20
    
    4. For a container to make use of those networks, you have to define them as external in the Compose stack like this:
    services:
      my-service:
        image: blah
        ...
        networks:
          vlan10:
    
    networks:
      vlan10:
        name: vlan10
        external: true
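
    As mentioned in step 2, applying and verifying the Netplan bridge can look roughly like this (a sketch, using the interface names from my config; untested on your system):

    sudo netplan try       # applies the config and rolls back if you do not confirm
    sudo netplan apply
    ip -br addr show br0   # the bridge should carry the host's LAN IP
    ip -br link show eno1  # the NIC itself should be up, but without an address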
    

    In step 4 you have the option of not defining an ipv4_address in the networks section; Docker will then just pick its own addresses when the containers start. Letting OPNsense assign IP addresses dynamically in such a VLAN is something that did not work for me. So either you let Docker pick the IPs when starting a stack, or you define your IP addresses in the stack. If you do the latter, you have to do it for every stack that ever joins that VLAN, otherwise Docker might pick an IP that you already assigned manually and that stack will not start.
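
    One way to reduce that collision risk (a sketch, reusing the hypothetical vlan10 values from above) is to give Docker only a slice of the subnet to pick from when you create the network, and keep your manually assigned addresses outside of that slice:

    docker network create -d macvlan \
      --subnet=192.x.10.0/24 --gateway=192.x.10.1 \
      --ip-range=192.x.10.128/25 \
      -o parent=br0.10 vlan10
    # containers without an explicit address get one from roughly .128-.254;
    # addresses you pin in a stack (or via "docker run --ip 192.x.10.10 ...")
    # can then live in the lower half of the subnet without colliding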

    I also wanted to have some services running directly in the LAN via Docker. This setup is a bit more involved and requires you to create a shim interface, otherwise the Docker host itself will not be able to reach containers running on the LAN network. This was the case for my Pi-hole, for example, which I wanted to have an IP in my LAN network and which had to be reachable by the Docker host itself too. There is a very good post about macvlan and shim networks in this blog: https://blog.oddbit.com/post/2018-03-12-using-docker-macvlan-networks/
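
    For reference, the shim from that blog post boils down to something like this on the Docker host (a sketch with hypothetical names and addresses; it does not survive a reboot unless you persist it, e.g. via a systemd unit):

    # extra macvlan interface for the host itself, on the same parent as the Docker network
    sudo ip link add macvlan-shim link br0 type macvlan mode bridge
    sudo ip addr add 192.x.x.250/32 dev macvlan-shim
    sudo ip link set macvlan-shim up
    # route only the range you reserved for LAN containers through the shim
    sudo ip route add 192.x.x.192/27 dev macvlan-shim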

    I hope this helps. Do not give up. Segmenting your Networks is important, especially if you plan to publish some services over the Internet.



  • buedi@feddit.org to Selfhosted@lemmy.world · Self hosting Signal server · 8 months ago

    Thanks for pointing out Simplex Chat, I did not know it existed. It looks very interesting, but reading more about it, they will have to implement some kind of business model in the future. My fear is that, even when self-hosting, some features will end up behind a paywall, so it is not a solution I would switch to… switching to a new messenger is a long-term endeavour. It is hard to convince friends to move over, let alone to switch again every few years; that is near impossible. But the technology behind Simplex looks really interesting, and reading through the docs it gives the impression of being very polished.




  • Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and templates, but I have not yet been able to attach the boot disk to a SCSI controller and make it boot. I would really have liked to see whether this change brings it on par with Proxmox (I now wonder what the defaults for Proxmox are), but even then, it would still be much slower than with Hyper-V or XCP-ng. If I find time, I will look into this again.


  • I do not work professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set up like this is beyond my knowledge. What you basically do in Apache Cloudstack when you do not have a template yet is: you upload an ISO, and in this process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24 etc.). From my understanding, the pre-defined OS types you can select and “attach” to an ISO seem to include the specifics used when you create a new instance (VM) in ACS, and this seems to set the controller to SATA. Why? I do not know. I tried to pick another OS type (I think it was called Windows SCSI), but in the end it still ended up being a VM with the disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.

    This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see if this makes a big difference in that specific workload.





  • That’s a very good question. The test system is running Apache Cloudstack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. It would be interesting to re-run the tests if it turns out not to be SCSI.

    Edit: I did a ‘virsh dumpxml <vmname>’ and the Disk Part looks like this:

      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none'/>
          <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
          <backingStore/>
          <target dev='sda' bus='sata'/>
          <serial>8d68ee83940d4b688b28</serial>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
    

    It is SATA… now I need to figure out how to change that configuration ;-)
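
    In case it helps anyone searching later, the route I would try looks roughly like this (untested; also note that Cloudstack may regenerate the domain XML when it stops / starts the instance, so the proper fix probably belongs in ACS itself):

    virsh shutdown <vmname>
    virsh edit <vmname>
    #  - change <target dev='sda' bus='sata'/> to bus='scsi'
    #  - remove the <address .../> line of that disk so libvirt can regenerate it
    #  - make sure a controller like <controller type='scsi' model='virtio-scsi'/> exists
    #  - the guest needs the virtio-scsi driver (virtio-win on Windows)
    virsh start <vmname>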







  • I spent half a day trying to get acme-dns + Cert Warden up and running and failed miserably, and I think I will give up on it. That does not usually happen, but during my debugging sessions I saw that the acme-dns project has not been regularly maintained for quite a while. The current maintainer just does not have enough time, but is trying to prepare the project for a move to a new GitHub organization so more people can help with it. Until then, issues and PRs accumulate, so I am no longer sure whether I should stick with acme-dns or do it differently.

    Why did I pick this scenario? Because I use Let’s Encrypt certificates and my DNS provider does not allow fine-grained API keys for DNS management. This means that, currently, every process that requests certificates in my network needs the full API key for the Let’s Encrypt DNS challenge.

    One way around that is the alternate Let’s Encrypt method (I think it is called DNS alias mode), where you request certificates for your main domain but put the TXT records for the DNS challenge on another domain. The simplest option is to just use a second domain for that, if you have one.

    I tried to do it with a subdomain of my main domain that I delegate to acme-dns. The whole acme-dns and domain delegation part works fine, but I am not able to hook it up to Cert Warden properly. I end up with error messages that make no sense to me, and since I cannot find any further information in the logs, as I said, I just gave up yesterday evening… for now ;-)
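
    For anyone trying the same: the delegation part itself is easy to verify (a sketch with hypothetical names; acme.example.org is where the acme-dns instance answers, and <uuid> is the subdomain acme-dns registered for you):

    # the CNAME that points the challenge name at acme-dns
    dig +short CNAME _acme-challenge.example.org
    # should return something like <uuid>.acme.example.org.

    # and once a challenge has been set, the TXT record should resolve as well
    dig +short TXT <uuid>.acme.example.org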

    Another thing I sometimes struggle with is my Pi-hole + Unbound setup, where Unbound for no apparent reason returns an NXDOMAIN for some queries, and I cannot figure out why, under which circumstances or when that happens. It just seems to be random, and a restart / cache clearing etc. does not fix it.
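
    When it happens again, I plan to check the layers separately, roughly like this (a sketch assuming the common setup where Unbound listens on 127.0.0.1#5335 behind Pi-hole, with remote-control enabled in unbound.conf):

    dig example.com @127.0.0.1 -p 53     # what Pi-hole answers
    dig example.com @127.0.0.1 -p 5335   # what Unbound itself answers
    # if only Unbound says NXDOMAIN, its cache or upstream path is the suspect:
    sudo unbound-control flush example.com
    sudo unbound-control lookup example.com   # shows which name servers it would ask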


  • PostgreSQL updates (at least across major versions) AFAIK require a manual backup / restore of the database, but better look that up. I think the last one I did went like this:

    1. Stop the Application Containers (here the Immich ones, so only PostgreSQL runs)
    2. Backup the Database
    3. Stop the PostgreSQL Container
    4. Change to the new PostgreSQL Version
    5. Start the PostgreSQL Container
    6. Restore the Database
    7. Start the Application Containers

    As I said, better look it up first, this is just how I remember the process (but not the backup / restore commands).
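
    A rough sketch of how that could look with Compose (container and service names and the postgres user are assumptions here; check the Immich docs for the real procedure):

    docker compose stop immich-server                  # 1. stop the application containers
    docker exec -t immich_postgres pg_dumpall -U postgres > db_backup.sql   # 2. backup
    docker compose stop database                       # 3. stop the PostgreSQL container
    # 4. bump the image tag of the database service in the compose file and move the old
    #    data volume aside, since the new major version cannot read it directly
    docker compose up -d database                      # 5. start the new PostgreSQL container
    cat db_backup.sql | docker exec -i immich_postgres psql -U postgres     # 6. restore
    docker compose up -d                               # 7. start the application containers again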