Yeah, and you’re pinging from server to client with no client connected. Ping from the client first to open the connection, or set keepalives on the client.
Your peer has no endpoint configured, so the client needs to connect to the server first for the server to know where the client is. Try from the client, and it’ll work both ways for a bit.
You’ll want the persistent keepalive option on the client side to keep the tunnel alive.
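For illustration, a minimal client-side config sketch with a persistent keepalive (the addresses, keys and hostname are placeholder values, not from this thread):

# /etc/wireguard/wg0.conf on the client (example values)
[Interface]
PrivateKey = <client private key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server public key>
# the client knows where the server is, the server learns the client’s endpoint on first handshake
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# send a packet every 25s so the NAT mapping stays open and server->client pings keep working
PersistentKeepalive = 25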
They should be in /run/systemd along with the rest of the generated units.
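For example, units produced by the fstab generator typically end up under /run/systemd/generator/ (exact layout can vary a bit by distro):

ls /run/systemd/generator/
systemctl cat home.mount   # the first comment line shows which generated file it came from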
I think it is a circular problem.
Another example that comes to mind: the sanctions on Huawei and whether Google would be considered to be supplying software because Android is open-source. At the very least, any contributions from Huawei are unlikely to be accepted into AOSP. The EU is also becoming problematic with the software origin and quality certifications it’s trying to impose.
This leads to exactly what you said: national forks. In Huawei’s case that’s HarmonyOS.
I think we need to get back to being anonymous online: if you’re anonymous nobody knows where you’re from, and your contributions get judged solely on their merit. The legal framework just isn’t set up for an environment like the Internet, which blurs the lines between borders and rarely offers a clear “this company is supplying that company in the enemy country”.
Governments can’t control it, and they really hate it.
The problem isn’t even where the software is officially based; it can become a problem for individual contributors too.
PGP for example used to be problematic because US export controls on encryption forbade exporting systems capable of strong encryption, since the US wanted to be able to break it when it was used by others. For an American to send the PGP tarball to the Soviets at the time would have been considered treason against the US, let alone letting them contribute to it. Heck, sharing 3D printable gun models with a foreign country can probably be considered supplying weapons as if they were real guns. So even if Linux were based in a more neutral country not subject to US sanctions, the sanctions would make it illegal to use or contribute to it anyway.
As much as we’d love to believe in the FOSS utopia that transcends nationality, the reality is we all live in real countries with laws that restrict what we can do. Ultimately the Linux maintainers had to do what’s best for the majority of the community, which mostly lives in NATO countries honoring the sanctions against Russia and China.
The sandboxing is almost always better because it’s an extra layer.
Even if you gain root inside the container, you’re not necessarily even root on the host. So you have to exploit some software that has a known vulnerable library, trigger that in that single application that uses this particular library version, root or escape the container, and then root the host too.
The most likely outcome is it messes up your home folder and anything else your user has access to, and quite possibly less than that.
Also, having a known vulnerability doesn’t mean it’s triggerable. If you use, say, a zip library and only use it to decompress your own assets, then it doesn’t matter what bugs it has; it will only ever decompress that one known good zip file. It’s only a problem if untrusted files get involved, where someone can trick the user into opening them and triggering the exploit.
It’s not ideal to have outdated dependencies, but the sandboxing helps a lot, and the fact that only a few apps ship known vulnerable libraries further reduces the attack surface. You start having to chain a lot of exploits to do anything meaningful, and at that point those kinds of efforts go toward bigger, more valuable targets.
The problem with a different spoof for each domain is that this behavior on its own can be used as a fingerprint based on timestamp and IP in access logs.
Hiding in the crowd is probably better, and since newer versions of Chrome all report the same UA, you blend in even more.
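For reference, the reduced UA string recent Chrome versions send looks roughly like this (the major version here is just an example; the minor/build/patch parts and the platform details are frozen):

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36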
No, if you deleted the btrfs driver it would simply fail to mount due to the missing driver, assuming it’s a separate module in the first place. Same with LUKS: if you don’t have the tools or the drivers installed for it, it just won’t mount. You’d have to be accessing the drive directly with something like dd to corrupt it.
My point was really that data can’t be that expensive even including transit fees from the likes of Cogent and Level3, because I can use TBs of bandwidth every month and OVH doesn’t even bother measuring it.
If my home ISP gives me a gigabit link, yes, I pay for all the cabling and equipment to carry that traffic. But that’s it: I already pay for infrastructure capable of providing me with gigabit connectivity. So why is it that they also want me to pay per GB?
In Europe they can provide gigabit connectivity for dirt cheap with no caps, and they don’t even bother with tiered speed plans there, so how come my $120+/mo Internet in the US isn’t sufficient to cover the bandwidth costs? It’s ridiculous; even Starlink doesn’t have data caps.
But somehow communities with crappy DSL that can barely do 10 Mbps still have ridiculously low data caps. It’s somehow not a problem for most ISPs in the world, except in the US, supposedly the richest and most advanced country in the world.
Yeah sure, then why is it that my entire bare metal server leased from OVH costs less than my Internet connection, and comes with fully unmetered access too?
I pay for a data rate and I should be able to use that rate as I please. If we paid for the amount of data, then why are plans advertised and sold by speed?
The error says /home is a symlink, what if you ls -l /home? Since this is an atomic distro, /home might be a symlink to /var/home.
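Hypothetical output if that’s the case (not from your actual system):

$ ls -ld /home
lrwxrwxrwx. 1 root root 8 Jan  1 00:00 /home -> var/home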
Docker, Distrobox, Toybox, systemd-nspawn, chroot.
Technically those all rely on the same kernel namespace features, just different ways to use them.
That’s also what Flatpaks and Snaps do. If you only care about package bloat, an AppImage would do too but it’s not a sandbox like Flatpak.
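As a quick illustration of the namespace features all of those build on (plain util-linux unshare, nothing container-specific, rough sketch):

# new user, mount and PID namespaces, mapped to root inside
unshare --user --map-root-user --mount --pid --fork bash
# inside the new namespaces:
id                         # uid=0(root), but only in here
mount -t proc proc /proc   # ps now only sees this namespace’s processes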
auto rollbacks and easy switching between states.
That’s the beauty of snapshots, you can boot them. So you just need GRUB to generate the correct menu and you can boot any arbitrary version of your system. On the ZFS side of things there’s zfsbootmenu, but I’m pretty sure I’ve seen it for btrfs too. You don’t even need rsync, you can use ssh $server btrfs send | btrfs receive, and it should in theory be faster too (btrfs knows if you only modified one block of a big file).
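Roughly like this, as root on both ends (snapshot paths are placeholders; the incremental form assumes both ends already have the parent snapshot):

# full transfer of a read-only snapshot, pulled from the server
ssh $server "btrfs send /snapshots/root-new" | btrfs receive /snapshots/

# incremental: only the blocks changed since the parent get shipped
ssh $server "btrfs send -p /snapshots/root-old /snapshots/root-new" | btrfs receive /snapshots/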
and the current r/w system as the part that gets updated.
That kind of goes against the immutable thing. What I’d do is make a script that mounts a fork of the current snapshot read-write into a temporary directory, chroots into it, installs packages, exits the chroot, unmounts, and then commits those changes as a snapshot. That’s the closest easy-to-DIY equivalent I can think of to what rpm-ostree install does. It does it differently (a daemon that manages hardlinks), but filesystem snapshots basically do the same thing without the extra work.
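A rough sketch of that flow on btrfs (the subvolume paths are made up, and arch-chroot is just a convenient wrapper that bind-mounts /proc, /sys and /dev for you):

# fork the current read-only root into a writable working copy
btrfs subvolume snapshot /.snapshots/current /.snapshots/work

# install into the fork
arch-chroot /.snapshots/work pacman -S --noconfirm htop

# freeze the result as the next read-only deployment and clean up
btrfs subvolume snapshot -r /.snapshots/work /.snapshots/next
btrfs subvolume delete /.snapshots/work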
However, I think it would be good to use OStree
I found this, maybe it’ll help: https://ostreedev.github.io/ostree/adapting-existing/
It looks like the fundamental idea is the same: a temporary directory you run the package manager in, and then you commit the changes. So you can probably make it work with Debian if you want to spend the time.
All you really have to do for that is mount the partition readonly, and have a designated writable data partition for the rest. That can be as simple as setting it ro in your fstab.
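For example, something along these lines in /etc/fstab (device names and layout are made up for the example):

# / stays read-only, the writable state lives on separate partitions
/dev/sda2  /      ext4  ro               0 1
/dev/sda3  /var   ext4  rw,nosuid        0 2
/dev/sda4  /home  ext4  rw,nosuid,nodev  0 2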
How you ship updates can take many forms. If you don’t need your distro atomic, you can temporarily remount read-write, rsync the new version over and make it readonly again. If you want it atomic, there’s the classic A/B scheme (Android, SteamOS), where you just download the image to the inactive partition and then switch over when it’s ready to boot into. You can also do btrfs/ZFS snapshots, where the current system is forked off a snapshot. On your builder you just make your changes, take a snapshot, then zfs/btrfs send it to all your other machines and boot off that new snapshot (readonly). It’s really not that magic: even Docker, if you dig deep enough, is essentially just tarballs being downloaded then extracted each into their own folder, and the layering actually comes from stacking them with overlayfs. What rpm-ostree does, from a quick glance at the docs, is leverage the immutability and just build a new version of the filesystem using hardlinks, and you switch root to it. If you’ve ever opened an rpm or deb file, it’s just a regular tarball and the contents pretty much map directly to the filesystem.
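The overlayfs stacking mentioned above boils down to a single mount (directory names are placeholders):

# stack two read-only layers under a writable one
mount -t overlay overlay \
  -o lowerdir=/layers/app:/layers/base,upperdir=/layers/rw,workdir=/layers/work \
  /merged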
Here’s an Arch package example, but rpm/deb are about the same:
max-p@desktop /v/c/p/aur> tar -tvf zfs-utils-2.2.6-3-x86_64.pkg.tar.zst
-rw-r--r-- root/root 114771 2024-10-13 01:43 .BUILDINFO
drwxr-xr-x root/root 0 2024-10-13 01:43 etc/
drwxr-xr-x root/root 0 2024-10-13 01:43 etc/bash_completion.d/
-rw-r--r-- root/root 15136 2024-10-13 01:43 etc/bash_completion.d/zfs
-rw-r--r-- root/root 15136 2024-10-13 01:43 etc/bash_completion.d/zpool
drwxr-xr-x root/root 0 2024-10-13 01:43 etc/default/
-rw-r--r-- root/root 4392 2024-10-13 01:43 etc/default/zfs
drwxr-xr-x root/root 0 2024-10-13 01:43 etc/zfs/
-rw-r--r-- root/root 165 2024-10-13 01:43 etc/zfs/vdev_id.conf.alias.example
-rw-r--r-- root/root 166 2024-10-13 01:43 etc/zfs/vdev_id.conf.multipath.example
-rw-r--r-- root/root 616 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_direct.example
-rw-r--r-- root/root 152 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_switch.example
-rw-r--r-- root/root 254 2024-10-13 01:43 etc/zfs/vdev_id.conf.scsi.example
drwxr-xr-x root/root 0 2024-10-13 01:43 etc/zfs/zed.d/
...
It’s beautifully simple. You could for example install ArchLinux without pacman, by mostly just running tar -x on the individual package files directly to /. All the package manager does is track which file is owned by which package (so it’s easier to remove), dependency solving (so it knows what else to pull in or things won’t work), and mirror/download management.
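For instance, hand-extracting the package above (skipping pacman’s metadata files; you obviously lose ownership tracking and dependency resolution this way):

tar -xvf zfs-utils-2.2.6-3-x86_64.pkg.tar.zst -C / \
    --exclude=.BUILDINFO --exclude=.PKGINFO --exclude=.MTREE --exclude=.INSTALL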
How you get that set up is all up to you. Packer+Ansible can make you disk images, and you can maybe just throw them on a web server, download them, and dd them to the inactive partition of an A/B scheme; that’d be quite distro-agnostic too. You could build the image as a Docker container and export it as a tarball. You can build a chroot. Or a systemd-nspawn instance. You can also just install a VM yourself, set it up to your liking, and then dd the disk image to your computers.
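Concretely, the A/B update step can be as dumb as this (the device and URL are hypothetical, and in practice you’d verify a checksum/signature before switching):

# write the new image to the currently inactive slot
curl -fL https://updates.example.com/rootfs-42.img.zst | zstd -d | dd of=/dev/sda3 bs=4M status=progress
# then flip the bootloader entry / partition flag to boot the new slot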
If you want some information on how SteamOS does it, https://iliana.fyi/blog/build-your-own-steamos-updates/
More information about storing electrons and light and other information like with most likely aliens abducting and exploiting people as a resource in a text document called “Information about totalitarian and manipulative aliens.odt”, also with picture in the post perhaps also prove these aliens are real:
That’s more like cocaine and meth levels than Adderall at this point
Why does the government keep trying to regulate fake Internet money? The whole point of it was that it was a free for all. Who the fuck cares if crypto bros get fucked; if you want real securities you go to a real bank and open a real investment account.
I’m talking about the new one they made from scratch in Rust: https://system76.com/cosmic
The data set is paywalled so it’s hard to know. If they picked shovelware most people would rather pirate, then yeah, they could easily reach that conclusion.
Denuvo could also just be making people forget about the game once the hype dies down, so they never end up trying it and thus never end up buying it.
Some people also end up buying the game on sale later, or well after they played it. I personally ended up buying a lot of the games I pirated a while back, well after their release.
Pop!_OS is about to drop a whole new desktop environment (COSMIC) made from scratch that’s not just a fork of Gnome. Canonical tried that as well a while back with Unity, although it was mostly still Gnome with extra Compiz plugins.
A lot of cool stuff is also either for enterprise use, or generally under-the-hood stuff. Simple package updates can mean someone’s GPU is finally usable. Even that LibreOffice update might mean someone’s annoying bug is finally fixed.
But yes, otherwise distros are mostly there to bundle up and configure the software for you. It’s really just a bunch of software, and you can get the exact same experience making your own with LFS. Distros also make choices for you: which versions are the best to bundle up as a release, what software and features they’re gonna ship, glibc or musl, PulseAudio or PipeWire, and so on. Some distros like Bazzite are all about a specific use case (gamers), and all they do is ship all the latest tweaks and patches so all the handhelds behave correctly and just run the damn games out of the box. You could use regular Fedora, but they have it all good to go for you. That’s valuable to some people.
Sometimes not much is going on in open-source, so it just makes for boring releases. That also likely means more focus on bug fixes and stability.
It’s nicknamed the autohell tools for a reason.
It’s neat but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they check the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, back when they were the official way to install things, so they also needed to check for dependencies, find dependencies, and all that stuff. Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile. There’s just no need to check for 99% of the stuff the autotools check: everything they check for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure you have the build dependencies present.
Ultimately that whole process just generates a Makefile via M4 macros, and the Makefiles it generates look about as good as any other generated Makefile from the likes of CMake and Meson. So you might as well just go for your hand-written Makefile, and use a better tool when it’s time to generate one.
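A hand-written Makefile for a small C project really doesn’t need much (illustrative sketch, not from any particular project; recipe lines start with a literal tab):

CC      ?= cc
CFLAGS  ?= -O2 -Wall -Wextra
PREFIX  ?= /usr/local

SRC := $(wildcard src/*.c)
OBJ := $(SRC:.c=.o)

myprog: $(OBJ)
	$(CC) $(CFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

install: myprog
	install -Dm755 myprog $(DESTDIR)$(PREFIX)/bin/myprog

clean:
	rm -f myprog $(OBJ)

.PHONY: install clean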
At least it’s not node_modules