For example, I’m using Debian, and I think we could learn a thing or two from Mint about how to make it “friendlier” for new users. I often see Mint recommended to new users, but rarely Debian, which has a goal to be “the universal operating system”.
I also think we could learn website design from… looks at notes …everyone else.
Fedora Atomic Desktop, mainly KDE.
Also, their traditional KDE variant is very bloated, which is why I updated this guide.
But overall it’s still my favourite distro. It has a nice community, all the desktops you want, SELinux (which, by the way, is required to make Waydroid somewhat secure), and their atomic variants are an awesome base thanks to uBlue.
You mention that their kernel is bloated — would you mind sharing how you measure that compared to other kernels, such as their kernel versus something more trimmed down? Is it a savings in storage space or in memory? I’ve never really considered the weight of a kernel when comparing distros, so if you have some method I’d love to try it and compare what I’m running.
I have no comparisons, as I think all distros ship the complete monolithic kernel. Of course specific IoT devices or Android ship a much smaller kernel.
Building the kernel is not that hard, as you have kernel-devel, which has all the sources. You can use make menuconfig to see everything that is enabled (as far as I understood it) and change options before compiling.

Monolithic kernels are pretty bad; see this excerpt of the interview with Jeremy Soller on RedoxOS.
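To make the menuconfig step concrete, here is a runnable sketch of what the resulting .config file looks like; the three CONFIG_ options below are a made-up excerpt, not a real Fedora config:

```shell
# Hypothetical excerpt of a kernel .config, the file menuconfig edits:
# "=y" means built into the monolithic image, "=m" means built as a
# loadable module, "is not set" means left out entirely.
cat > sample.config <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_BTRFS_FS=m
CONFIG_DRM_NOUVEAU=m
# CONFIG_DRM_AMDGPU is not set
EOF

# Count how much of this excerpt is modular vs. built in:
echo "modules:  $(grep -c '=m$' sample.config)"
echo "built-in: $(grep -c '=y$' sample.config)"
```

A real config has thousands of such options, which is exactly the surface people mean when they talk about debloating the kernel.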
So I don’t mind the memory or even the smaller storage footprint; unfortunately, that is not what’s relevant here at all. I just care about keeping the root binary with access to all my stuff as small as possible.
I would love a system that detects the hardware in use and then builds the correct small kernel for it. There are also experiments making the CentOS LTS kernel work on Fedora, which would avoid many recompilations.
Yeah. We don’t end up with monolithic kernels because of some myth that it’s hard to do otherwise. As in any case where you find yourself thinking “it doesn’t look that hard; I could do that easily”, it’s either harder than it looks or it’s done a certain way for an entirely different reason you haven’t figured out.
You should learn that reason.
There are many steps that need to be followed; I haven’t had the time yet, but it is possible. You need to sign the modules and the kernel, package it as an RPM, maybe sign that too, etc. It’s not trivial if you do it right, but it’s also not very hard.
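The signing step can be sketched roughly like this; the key subject, file names, and module name are placeholders, and the sign-file/mokutil lines assume a typical Secure Boot MOK setup rather than Fedora’s exact packaging flow:

```shell
# 1) Generate a machine-owner key pair (this part runs anywhere):
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=my custom kernel key/" \
  -keyout MOK.key -out MOK.crt

# 2) Inside the kernel source tree, sign each module (needs the kernel
#    sources; the script path can vary by version):
#   ./scripts/sign-file sha256 MOK.key MOK.crt mymodule.ko
# 3) Enroll the certificate so Secure Boot trusts your builds:
#   sudo mokutil --import MOK.crt
```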
If you don’t mind me asking, then how do you know the kernel they use is bloated compared to any other kernel? The vast majority of device support is loaded only when that device is detected, via kernel modules. You aren’t actually running everything in the entire kernel; it just has support for the devices if it does detect them, which is basically the functionality you are asking for: ad-hoc device modules.
Monolithic kernels aren’t “bad”; that’s subjective. Monolithic kernels have measurable and significant performance benefits over microkernels. You also gain a massive reduction in complexity. Microkernels have historically not been very successful (e.g. Hurd), because that complexity management is extremely difficult. Not impossible, but so far kernel development has favored monolithic kernels, and not without reason.
If what you say is actually that easy, why wouldn’t all distros just do that during the install, and during updates with their package managers? I believe you can do this in Gentoo, but I don’t know whether it has measurable benefits beyond what performance tuning for your specific CPU architecture would give you, since the devices you aren’t running don’t consume any resources beyond the storage space of the kernel.
“Their kernel is bloated” :D I don’t compare it with anything, as a Linux distro’s job is pretty much to make me forget other ways to get “the Linux stuff”, because they are so good at it.
(Imagine how good Linux support would be if everyone were on the same distro family, like Fedora Rawhide/stable/oldstable, CentOS Stream, AlmaLinux, Rocky Linux, RHEL.)
If that is true, and if the kernel will never load anything not needed for my device, then I am fine with it.
I see how a monolithic kernel is less complex and also a huge performance win over having the handshake between userspace and kernel space all the time (a Meta dev on #techovertea talked about that).
But I would still want to debloat the kernel of unused code, as it is there somewhere and may get activated and used (why else would you blocklist kernel modules?).
Also, compiling for x86_64-v4 would probably improve speed, and it is ridiculous to have the entire distro built for 20-year-old hardware, neglecting all the improvements from over a decade.
It wouldn’t be too difficult™ to fork their kernel and make custom configs of it. Here’s the git repo that holds their RPMs and the respective kernel configs; it’s just that nobody has cared enough to create or propose “slimmed down” specialized kernel images: https://src.fedoraproject.org/rpms/kernel/tree/rawhide You can just clone the repo and point COPR at it to automatically build custom kernels.
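A hedged sketch of that workflow (untested here; the config file name is Fedora’s convention, the release and project names are placeholders):

```shell
# Untested sketch: rebuilding Fedora's kernel with a trimmed config.
#   git clone https://src.fedoraproject.org/rpms/kernel.git && cd kernel
#   $EDITOR kernel-x86_64-fedora.config      # drop options you don't need
#   fedpkg --release rawhide local           # build the RPM locally, or:
#   copr-cli buildscm --clone-url <your-fork-url> <your-copr-project>
```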
A while ago there was a proposal to raise the x86 microarchitecture level. Here’s a recent discussion of that proposal: https://discussion.fedoraproject.org/t/what-happened-to-bumping-the-minimum-supported-architecture-from-x86-64-to-x86-64-v2/96787/2
In general, though, Fedora would not want to leave any users behind. Instead, a proposal for hwcaps is currently being drafted: https://pagure.io/fesco/issue/3151 With hwcaps, default installs will be x86_64-v1, but will be upgraded to “optimized” packages, if available, upon updating. This makes packaging a bit awkward, though. Packagers already need to maintain packages for multiple versions of the distro; in fact, they need to support F38, F39, F40, and Rawhide at the moment. Needing to maintain an extra three builds for each package on top of x86, x64, aarch64, ppc64le, and s390x is a bit of a burden, so success might be limited.

Distrobox, while feature-rich, is still a bit hacky (though it’s still more reliable in my experience than Toolbx). You’re not the first to want this, though: https://github.com/fedora-silverblue/issue-tracker/issues/440
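To make the hwcaps mechanism concrete: a package can ship several builds of the same library, and the glibc dynamic loader picks the best one the CPU supports. Here libfoo is a made-up library name used purely for illustration:

```shell
# glibc-hwcaps search order, best match first:
#   /usr/lib64/glibc-hwcaps/x86-64-v4/libfoo.so.1   # AVX-512-class CPUs
#   /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   # AVX2-class CPUs
#   /usr/lib64/glibc-hwcaps/x86-64-v2/libfoo.so.1   # SSE4.2/POPCNT-class
#   /usr/lib64/libfoo.so.1                          # x86-64 baseline
# You can ask the loader which levels your machine qualifies for
# (glibc 2.33 or newer):
#   /lib64/ld-linux-x86-64.so.2 --help | grep supported
```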
Secureblue removes a good amount of unused kernel components, and even some useful ones like Bluetooth and Thunderbolt, but you can always manually re-enable them.
Yes, that is the direction I am going in. But they just prevent kernel modules from loading; I don’t know if that is as complete as simply not building them.
But if it’s possible, then everyone with an AMD or Intel GPU should block nouveau, and vice versa. Just keep it small.
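For what it’s worth, blocking a module today is just a config fragment, not a rebuild; a hypothetical /etc/modprobe.d/blacklist-nouveau.conf might look like this:

```
# Hypothetical /etc/modprobe.d/blacklist-nouveau.conf
# Keep nouveau from autoloading when the device is detected:
blacklist nouveau
# And refuse even explicit or dependency-triggered loads:
install nouveau /bin/false
```

After editing modprobe.d you would also rebuild the initramfs (on Fedora, sudo dracut --force) so early boot honors it.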
Yeah, this is the old “run anywhere” philosophy of Linux (and of computers in general) that got us here. Another problem with stripping down kernel drivers is that swapping a hardware component would require rebuilding the kernel, which regular users will definitely not be happy about.
It would be a problem because of how it is currently done.
I imagine an install ISO that ships a generic monokernel, sets up the kernel-build tooling, and detects the drivers the hardware needs. Save the config and build the matching kernel from that.
Now if you want to swap hardware, a transition tool within the OS lets you state the new hardware component and removes the old driver from the config.
Or you switch back to the monokernel temporarily and run the hardware detection and config change again.
Or you use the install USB stick (which you already have), which already runs a monokernel, and it detects the hardware, changes the config on the OS, and builds and installs the kernel to the OS.
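Something close to this already exists upstream as make localmodconfig, which trims a config down to the modules currently loaded. A runnable sketch of the input it consumes, using made-up lsmod output instead of real hardware:

```shell
# Fake `lsmod` output standing in for real hardware detection:
cat > lsmod.txt <<'EOF'
Module                  Size  Used by
amdgpu              9999360  12
iwlwifi              389120  1
snd_hda_intel         57344  3
EOF

# The module names that would be kept; everything not on this list
# gets disabled in the generated config:
tail -n +2 lsmod.txt | awk '{print $1}'
```

In a kernel source tree, make LSMOD=lsmod.txt localmodconfig would then produce exactly the kind of hardware-matched minimal config described above.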
This is a bit more complex than, for example, what Fedora plans with their new WebUI installer. Unfortunately, such a system also doesn’t work that well with kernel updates being so frequent.
I am not an expert, but I feel like rebuilding the kernel is probably too slow for most users.
And since the kernel already loads modules dynamically, disabling them would practically make sure they will not be loaded.
I feel like we don’t need to go all the way down to a microkernel to solve the problem of loading too many drivers.
What I really like about stuff like RedoxOS, COSMIC, Typst, SimpleX, Wayland and others is having things built from a modern perspective with modern practices.
Linux is ancient now, and it’s a miracle that it is thriving like this.
If dynamic loading really is that robust, it probably doesn’t matter. But I don’t know how big the performance gains are, and I really need to do benchmarks before and after.
There are, by the way, also some experiments on making the CentOS Stream LTS kernel run on Fedora, which would be another great way of getting a more stable system.