

That's just how IPv6 works. Your ISP delegates a prefix to your router, and every device behind it gets its own unique address out of that prefix. Considering how large the address space is, all the addresses can be globally unique. No NAT means no port forwarding needed!
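To make the delegation concrete, here's an illustrative breakdown using the IPv6 documentation prefix (the prefix sizes are typical, not universal):

```
ISP delegates:   2001:db8:abcd::/56      -> your router
Router assigns:  2001:db8:abcd:1::/64    -> one LAN segment
Device picks:    2001:db8:abcd:1::1234   -> one host, globally routable
```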
Right? My flake is pretty complex at this point. I use it for over 6 computers: my storage server, compute servers, a VPS, etc. It's been perfectly stable for over 3 years. I update with the release cycle every 6 months. I've never needed more than a small change here or there, and it usually warns me of deprecations ahead of time.
Thankfully I've only needed to roll back twice, and both times it went perfectly. I lost no data and kept working while I waited for a fix. If my flake ever blows up completely I'll switch… but I doubt that will happen lol
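If anyone's curious what a multi-host flake looks like, here's a minimal sketch (hostnames and module paths are made up):

```nix
# flake.nix: one nixosConfigurations entry per machine
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      # build with: nixos-rebuild switch --flake .#storage
      storage = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/storage/configuration.nix ];
      };
      vps = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/vps/configuration.nix ];
      };
    };
  };
}
```

And rolling back is just `nixos-rebuild switch --rollback`, or picking an older generation from the boot menu.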
The rules still apply to the host, just not to the container's published ports; Docker simply bypasses them. If you block all ports but have port 81 published like you do in that section of the docker compose file, you'd expect UFW to block it, but that's not the case. Going to http://yourip:81/ will show the NPM GUI even if you specifically use UFW to block 81. If you only expose ports 80 and 443, you should be fine: your NPM container would have to be compromised, and then they'd have to break out of the container.
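One way around this (just a sketch, adjust to your compose file) is to publish the admin port on the loopback interface only, so Docker never opens it to the outside world:

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:81:81"   # admin GUI reachable only from the host itself
```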
Also, I think your issue is with your DNS. You should have an A record pointing example.com at the IP, and then a CNAME record pointing sub.example.com at example.com.
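In zone-file terms it looks something like this (the IP is a placeholder):

```
example.com.       A      203.0.113.10
sub.example.com.   CNAME  example.com.
```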
Docker completely ignores UFW rules. If you check your iptables you'll see Docker's rules are inserted ahead of UFW's. For the 504 though, it sounds like traffic is not reaching NPM. Have you routed ports 80 and 443 to the docker container?
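You can see it for yourself; these are the standard chains Docker creates (assuming iptables-based UFW):

```
sudo iptables -L DOCKER-USER -n --line-numbers   # the chain Docker leaves for your own rules
sudo iptables -t nat -L DOCKER -n                # the port-publishing rules Docker injects
```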
I use headscale on a VPS as an ingress point into my network and I love it. On top of headscale, I use two instances of traefik to tie the network together. One instance runs on the VPS and serves a couple of services that I want running 24/7 (headscale-ui is nice). It pulls a certificate for its subdomain for TLS, so any services under, say, *.vps.example.com get routed to the VPS.
Then I have a wildcard TCP rule pointing the rest of the network traffic to my home server through headscale. My home server runs another instance of traefik, where all my services live. That one pulls a wildcard cert for the rest of the *.example.com subdomains.
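A sketch of what that TCP rule looks like in traefik's dynamic config (the address is a made-up headscale IP for the home box):

```yaml
tcp:
  routers:
    to-home:
      rule: "HostSNI(`*`)"   # catch-all; TLS terminates at home, not on the VPS
      service: home-server
      tls:
        passthrough: true
  services:
    home-server:
      loadBalancer:
        servers:
          - address: "100.64.0.2:443"
```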
The cool thing about this setup is I can now have my DNS server rewrite *.example.com to my server's LAN IP. When my device is home, everything works even when the WAN is out. But when I'm out and about, it hits public DNS and goes through my VPS. With traefik I can write a !ClientIP rule and essentially block traffic coming from the VPS. Now I can host a service at home but also block it from being accessed from the public. And if I need access to the LAN remotely, I can just use a tailscale client, get into headscale, and see everything.
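Assuming Traefik v3's ClientIP matcher, the "LAN-only" rule ends up roughly like this (the VPS's tailnet IP is made up, and the service definition is omitted):

```yaml
http:
  routers:
    private-app:
      # match the host, but refuse anything arriving from the VPS's tailnet address
      rule: "Host(`private.example.com`) && !ClientIP(`100.64.0.1`)"
      service: private-app
      tls: {}
```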
It's an odd network, but it's super flexible and works very well for my use case. If you have any questions I'd love to help you set something like this up :D
The overlap of docker container paths needs to happen from the perspective of the containers. If you send Radarr to pull a movie from BitTorrent, they both need to "be in the same spot": if the BitTorrent client thinks it's saving a movie to /data/torrent, then Radarr also needs to see the movie at /data/torrent.
That's why so many guides use the /data/ layout scheme. It's just easy to use and implement. Side note: for hard links to work, all the folders need to be on the same filesystem. You can't hard link between different drives.
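In compose terms that just means both containers mount the same host directory at the same container path (the host path here is made up):

```yaml
services:
  deluge:
    volumes:
      - /srv/media:/data   # saves the download to /data/torrent
  radarr:
    volumes:
      - /srv/media:/data   # sees the exact same file at /data/torrent
```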
Ah, sorry to hear that. Did you find something better that works for you? I'm open to suggestions :D
I followed along with the NixOS wiki for Kubernetes, and creating the "master" kubelet is super easy when you set easyCerts = true. The problem is, it spits out files to /var/lib/kubernetes/secrets/ that are owned by root, specifically the cluster-admin.pem file. If I want to push commands to the cluster using kubectl I have to elevate to a root shell. I could just chmod or chown the file, but that seems like a security risk.
Now, I'm not familiar with k8s at all. This is my first go-through, so I could be doing something wrong or missing a step. I saw something about role-based access control but I haven't jumped down that rabbit hole yet. Any tips for running kubectl without root?
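For reference, this is the kind of thing I'm considering recording in my configuration.nix: a sketch that grants a group read access instead of chmod-ing by hand (the key file name is my guess at what easyCerts produces):

```nix
# let members of the "kubernetes" group read the admin credentials
# instead of running kubectl as root
users.users.myuser.extraGroups = [ "kubernetes" ];
systemd.tmpfiles.rules = [
  "z /var/lib/kubernetes/secrets/cluster-admin.pem 0640 root kubernetes -"
  "z /var/lib/kubernetes/secrets/cluster-admin-key.pem 0640 root kubernetes -"
];
```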
I'm working on my first Kubernetes cluster. I'm trying to set the systems up with NixOS. I can get a kubelet and a control plane running, but I'm getting permission errors when trying to use kubectl rootless on the system running the control plane. I think I've figured out which file I need to change; now I just want to record that change in my configuration.nix.
I've got a Galaxy Fold 4 that I pre-ordered, so it's over 2 years old. I've rocked it this whole time with no case and dropped it plenty of times. It's got a couple of scratches on the hinge but it has been the most solid phone I've ever owned. From my experience, your claims are simply not true.
Ha, ya know? I think I know some people who will just regurgitate whatever input they receive
…
:(
Mullvad lets you write down an account number on a piece of paper and mail it in with cash, and they'll activate it.
What do you mean? If it makes you feel any better, the Earth will be fine. Has been for a couple billion years. We did this to ourselves :(
Don't! You can still install Nix on Garuda. It works great as a standalone package manager that won't get in the way.
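(For anyone wondering, it's the standard installer from the nixos.org download page:

```
sh <(curl -L https://nixos.org/nix/install) --daemon
```

The --daemon flag does a multi-user install; everything lands under /nix, so pacman is left alone.)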
I think the problem is that most people dive right in and go straight to NixOS, which has its quirks as a Linux OS (see FHS). The Nix language is great at building and moving source code between computers, or really any big collection of binaries. If you don't need that, try just using the nix-shell command to instantly run a piece of software without installing it. You can write a shell.nix file to hop into and out of an environment with whatever software you need. Once you can write a couple of .nix files, then move on to NixOS, which after all is just a big collection of binaries.
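For example, a one-off is just `nix-shell -p cowsay`, and a tiny shell.nix looks like this (the packages are just for illustration):

```nix
# shell.nix: run `nix-shell` here to enter the environment, `exit` to leave;
# nothing gets installed system-wide
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [
    pkgs.python3
    pkgs.jq
  ];
}
```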
The name is silly, but the Galaxy XCover6 Pro checks all those boxes as a new phone. It even has the old-style notification light, with different colors for different notifications.
Servarr is a stack of applications that together make up a media suite. Radarr and Sonarr handle managing movies and TV shows, respectively. Prowlarr searches for the media through either torrent or Usenet indexers. Then you need a downloader like SABnzbd or Deluge. Ombi is another application that handles requests, and finally you need a streaming app like Plex, Emby, or Jellyfin.
Think of it like a marionette: you're making a bunch of services work together toward one goal. Most people use docker and create a docker compose file to manage all the services. Typically the flow goes like this: a person makes a request to Ombi for something to watch. That request goes to Radarr or Sonarr, which creates a folder and populates the metadata from IMDB. Then a request is sent to Prowlarr to find the media. Once found, it's sent to the downloader, like Deluge, to actually grab the media. After it's done, Radarr / Sonarr will import the media into the correct folder. Now you've got a perfect collection for Plex / Emby / Jellyfin to start streaming your media. Really awesome suite once you get it up and running.
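A skeleton of the usual compose file, if that helps picture it (the linuxserver.io images are a common choice; paths and ports are illustrative):

```yaml
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    ports: ["7878:7878"]
    volumes: ["/srv/media:/data"]
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports: ["8989:8989"]
    volumes: ["/srv/media:/data"]
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    ports: ["9696:9696"]
  deluge:
    image: lscr.io/linuxserver/deluge:latest
    volumes: ["/srv/media:/data"]
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    ports: ["8096:8096"]
    volumes: ["/srv/media:/data:ro"]
```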
I wish I had set up an identity management system sooner. I've been self-hosting for years, and about a year ago I took the full plunge into putting all my services behind Authentik. It's a game changer not having to deal with all the usernames and passwords.
In a similar vein, before Authentik I used Vaultwarden to manage all my credentials. That was also a huge game changer with my significant other. Being able to have them set up their own account and then share credentials through an organization is super handy.