They’re not trivializing, just noting the different things you need to discuss for kernel development compared with other work. It is very different in a lot of ways, and does shape your perspective. I also find it interesting.
For the same reason spoken languages often have semantic structures that make a literal translation cumbersome and incorrect, translating nontrivial code from one language into another without being a near-expert in both languages, as well as an expert in the project in question, can lead to differences in behaviour ranging from “it crashes and takes down the OS with it” to “it performs worse”.
An overly simplistic view of filesystem design.
More complex data structures are harder to optimise for pretty much all operations, but I’d suggest that by far the most important metric for performance is development time.
Yes, but note that neither the Linux Foundation nor OpenZFS is going to put themselves at legal risk on the word of a Stack Exchange comment, no matter who it’s from. Even if their legal teams all have no issue with it, Oracle has a reputation for being litigious, and the fact that they haven’t resolved the issue once and for all, despite being able to, suggests they’re keeping the possibility of litigation in their back pocket (regardless of whether such a case would have merit).
Canonical has said they don’t think there is an issue and put their money where their mouth was, but they are one of very few to do so.
Opengear in Brisbane; development teams often use Linux.
Brand new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.
Premature optimisation could kill a project’s maintainability; wait a few years. Even then, despite Kent’s optimism, I’m not certain we’ll see performance beating a good non-CoW filesystem; XFS and ext4 have been eking out performance gains for many years.
License incompatibility is one big reason OpenZFS is not in-tree for Linux; there is plenty of public discussion about this online.
I’ll also tack on that when you use cloud storage, what do you think your stuff is stored on at the end of the day? Sure as shit not Bcachefs yet, but it’s more likely than not on some NetApp appliance, for the same features that Bcachefs is developing.
In addition to the better hardware flexibility mentioned in another comment, I’ve seen really interesting features, like defining compression & deduplication in a granular way, even to the point of using one compression algo when you first write data, and a different, more expensive one when your computer is idle.
This is actually a feature that enterprise SAN solutions have had for a while; being able to choose your level of redundancy & performance at the file level is extremely useful for minimising downtime and not replicating ephemeral data.
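To make that concrete, here’s a rough sketch of how the granular options look in bcachefs. The option names are from memory of the bcachefs docs, the device path and file paths are illustrative, and the `setattr` flag in particular is an assumption, so check `bcachefs format --help` and `bcachefs setattr --help` before trusting any of it:

```
# Cheap compression on first write; recompress with something heavier
# in the background when the machine is idle:
bcachefs format --compression=lz4 --background_compression=zstd /dev/sdX

# Per-file redundancy: keep two copies of the important stuff
# (flag name is an assumption; verify with `bcachefs setattr --help`):
bcachefs setattr --data_replicas=2 /srv/important
```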
Most filesystem features are not for the average user who has their data replicated in a cloud service; they’re for businesses where this flexibility saves a lot of money.
Regarding 1: if you open up dmesg after it happens and you see an error about “No EDID read”, your GPU is having a hard time automatically getting the monitor’s EDID over DisplayPort. My 7800 XT has this issue.
If your monitor setup doesn’t change much, you can manually set the EDID on a per-output basis. Here is a good guide.
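In outline, it looks something like this (a minimal sketch; the connector name `DP-1`, card number, and file names are illustrative, so adjust for your setup):

```
# 1. Dump the EDID once while the monitor is being detected correctly:
cat /sys/class/drm/card0-DP-1/edid > /usr/lib/firmware/edid/my-monitor.bin

# 2. Force that EDID for the output with a kernel parameter
#    (e.g. appended to GRUB_CMDLINE_LINUX):
drm.edid_firmware=DP-1:edid/my-monitor.bin
```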
Also, regarding 3: you may need to set your amdgpu feature mask in your kernel parameters.
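If it’s the same mask I’m thinking of, it looks something like this (0xffffffff is the “everything on” value; you may want something more targeted for your card):

```
# Kernel parameter enabling all amdgpu PowerPlay features,
# including overdrive (manual clock/voltage control):
amdgpu.ppfeaturemask=0xffffffff
```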
If it’s a G502/702, they’ve got a very fucky scroll wheel & middle click; it’s actually a lemon, but since nothing else works with the wireless pads they’re the only options.
Care to elaborate?
Kernel modules don’t have to be open source, provided they follow certain rules, like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.
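For anyone curious what that looks like in practice, here’s a minimal sketch of an out-of-tree module. The licence string is the bit the kernel actually checks: declare anything non-GPL and the module loader refuses to resolve symbols exported with EXPORT_SYMBOL_GPL(), while plain EXPORT_SYMBOL() exports stay available:

```c
#include <linux/module.h>
#include <linux/init.h>

static int __init demo_init(void)
{
	pr_info("demo module loaded\n");
	return 0;
}

static void __exit demo_exit(void)
{
	pr_info("demo module unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);

/* A non-GPL licence string taints the kernel and locks this module
 * out of every EXPORT_SYMBOL_GPL() symbol: */
MODULE_LICENSE("Proprietary");
```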
It’s not enforced so much by law as by what the FSF and Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when they’re a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.
Both Intel and AMD invest a lot into open-source drivers, firmware and userspace applications, but also, due to the nature of x86_64’s UEFI, a lot of the proprietary crap is loaded from ROM on the motherboard and as microcode.
I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a heavily patched, old, orphaned kernel, often with drivers that are provided only as precompiled binaries, preventing you from updating the kernel yourself.
If you want that source code, you also need to pay a lot of money yearly to be a Qualcomm partner, and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don’t expect them to be updated for new kernel compatibility; you’ve gotta do that yourself.
Many other manufacturers do this as well, but few are as bad. The environment is getting better, but it seems to be a feature that many large manufacturers feel they can live without.
I’ve seen some optometry equipment running RHEL
About a year ago I moved to Hyprland & Wayfire for my NVIDIA & Intel boxes. Swapped the NVIDIA card for a Radeon a few months back and had mixed results.
Recently tried Plasma 6 for experimental HDR and am impressed.
Moving some packages (especially libraries) onto an unstable branch while keeping others back on a stable one. It probably won’t fuck you immediately, but when it does, it’ll be a bastard to diagnose, because you will have forgotten what you did.
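On Debian-ish systems, apt pinning makes this dangerously easy. A sketch of the kind of setup being warned against (package names and priorities are illustrative):

```
# /etc/apt/preferences.d/99-mixed
# Pull one library family from unstable...
Package: libfoo*
Pin: release a=unstable
Pin-Priority: 990

# ...while everything else stays pinned to stable.
Package: *
Pin: release a=stable
Pin-Priority: 900
```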