• 0 Posts
  • 56 Comments
Joined 1 year ago
Cake day: June 6th, 2023

  • As far as I was aware AMDGPU is used by default on most if not all distros

    I really don’t think that’s the case, assuming you’re talking about AMDVLK (amdgpu is the kernel module used by all three Vulkan drivers - RADV, AMDVLK and the Vulkan driver from AMDGPU-PRO). Ubuntu and Fedora definitely default to RADV, and the Arch Wiki recommends RADV unless you need something from the other drivers.

    I noticed a performance increase after forcing RADV on NixOS so not really sure.

    NixOS seems to default to RADV according to their Wiki. If this was a few years ago, could you be confusing it with the ACO shader compiler for RADV? That brought a significant performance increase and eventually became the default in RADV. I remember using custom builds of Mesa (the project that develops open-source graphics drivers like RADV and radeonsi) to massively reduce stuttering in DirectX games.
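    If you ever want to double-check which Vulkan driver a system actually ended up using, one way is to query the device name through the Vulkan API - RADV includes its own name in the string it reports. A rough C sketch (assumes the Vulkan headers and loader are installed; the GPU names in the comments are just examples):

        /* Build with something like: cc vk_driver_check.c -o vk_driver_check -lvulkan */
        #include <stdio.h>
        #include <vulkan/vulkan.h>

        int main(void) {
            VkInstanceCreateInfo info = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
            VkInstance instance;
            if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
                fprintf(stderr, "failed to create a Vulkan instance\n");
                return 1;
            }

            uint32_t count = 0;
            vkEnumeratePhysicalDevices(instance, &count, NULL);
            VkPhysicalDevice devices[16];
            if (count > 16) count = 16;
            vkEnumeratePhysicalDevices(instance, &count, devices);

            for (uint32_t i = 0; i < count; i++) {
                VkPhysicalDeviceProperties props;
                vkGetPhysicalDeviceProperties(devices[i], &props);
                /* RADV reports e.g. "AMD Radeon RX 6700 XT (RADV NAVI22)",
                 * while AMDVLK / AMDGPU-PRO report the plain marketing name. */
                printf("GPU %u: %s\n", i, props.deviceName);
            }

            vkDestroyInstance(instance, NULL);
            return 0;
        }

    Tools like vulkaninfo show the same information; the point is just that the driver identifies itself in the device properties, so it’s easy to verify whether forcing RADV actually took effect.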


  • I personally chose RADV after looking into this myself and the only drawback from my understanding is that they are proprietary drivers.

    RADV is the open-source, community-developed Vulkan driver. It has the widest hardware support of the three Vulkan drivers and is generally the best for gaming.

    AMD provides two more Vulkan drivers - AMDVLK is the open-source one shipped in AMD’s AMDGPU driver package, and then there’s the unnamed proprietary Vulkan driver in AMDGPU-PRO. The biggest advantage of the proprietary one is that it is certified - that doesn’t matter most of the time, but when it does, a missing certification is a deal breaker.


    That depends a lot on how the license gets interpreted and how license violations are handled by the local law. The argument for why the end user cannot do anything about a GPL violation is that the violated contract is between upstream and the “bad” developer - the upstream project gave the bad developer access to their source code under the condition that the license stays the same. You as the end user are only exposed to the bad developer’s license, so you can’t do anything yourself; it’s the upstream who must force them to extend a proper license to you.

    However, there was also a case recently where the FSF argued that this interpretation / handling of the situation goes against the spirit of the GPL, and I think they won, so… yeah, it’s just unclear. Which is normal for legal texts (IMHO intentionally, but I’m not here to rag on lawyers, so I’ll leave it at that).


    While I agree with your view (at least when it comes to firmware, especially given that hardware which doesn’t require a firmware upload on boot generally just has the very same proprietary firmware in built-in memory, so the only difference is that you don’t even get to touch the software running on it), the point of this project is to remove non-libre components from coreboot/libreboot.

    It doesn’t differentiate itself from upstream in any other way, so if it fails to do the one thing it was made to do, then that is indeed newsworthy.


  • I do not know of any such dongle, but I’d like to ask you a question if you don’t mind: are you looking for a dongle with open-source firmware, or would a dongle that has its (proprietary) firmware stored in some onboard memory be acceptable?

    The second option wouldn’t require you to install any proprietary firmware on your computer, but you’d still rely on proprietary firmware for the device to run. And unlike a dongle with FOSS firmware, it might actually exist.


  • I know this isn’t Reddit, but r/peopleliveincities… When 90% of desktop users use Windows, it’s going to both be the most targeted by malware developers and have the highest chance of being operated by someone who doesn’t understand enough about computers to recognize that the shiny calculator app that just popped up after visiting a very legit Nigerian prince’s crowdfunding page probably shouldn’t need admin access.

    And speaking of user error, I’m willing to bet that basic security practices like using full disk encryption, Secure Boot, some MAC layer (provided by antivirus on Windows, AppArmor/SELinux on Linux) and regularly applying security updates are way more common over in Windows land. If I were in a situation with one completely randomly selected Windows PC and one equally randomly selected Linux PC, and my life depended on being able to gain access to either of them (some kind of really messed up Saw trap? idk), I would definitely bet my life on the Linux one being misconfigured.

    Don’t get me wrong, Linux can make for a very secure and private OS, but most installs most definitely cannot be described as such - just look at the popularity of random unverified PPAs on Ubuntu derivatives or AUR packages on Arch.


    A reasonable build of the kernel optimized for virtualization won’t take more than a few tens of megabytes of RAM (and it will have support for memory ballooning, so the virtualized kernel will give the memory it doesn’t need back to the host), and the userspace will need to be separate anyway due to how different Android is from normal Linux distros.

    Containers are nice when you want to run dozens of separate services on the same server or want to get the benefits of infrastructure as code, but in this case they would provide minimal benefits at the cost of having no way of loading any kernel modules not built into whatever ancient kernel version your SoC manufacturer decided you have to use on your phone. Also, container escape vulnerabilities are still a bit more common than full VM escapes, so this is also better for security on top of being more useful.



  • Even on my home server (a desktop with 64 gigs of ram) the ram check takes longer than the OS.

    I was pretty sure I messed something up when I upgraded the RAM in my desktop from 16 to 64 gigs and it wouldn’t output any signal for a solid 10 seconds, lol. And the regular 5-second black screen on normal boots was still something I had to get used to, coming from maybe a second with 16 GB.


  • Your argument is to have 2 subtly incompatible abis and one day binaries magically break.

    Whereas your argument seems to be to have a special C variant for 32-bit Linux - there’s no reason to have a special time64_t anywhere else.

    No program with time32_t will ever work after 2038, so any compiled that way are broken from compilation.

    Yeah, so what will breaking the ABI do? Break it a bit more?

    If you really want to be clever, mangle the symbols for the functions that handle time so they encode time64 as appropriate

    That’s what musl libc does, and the result is two subtly incompatible ABIs - statically linked programs are fine, but if a dynamically linked library exports any function with a time_t parameter or return value, it will use whatever size was configured at build time, and that size becomes part of its ABI. So fixing this properly would require every library that wants to pass time_t values through its API to implement its own name mangling. That’s not a reasonable request for a barely used platform (remember, this is just the 32-bit userland; 64-bit was never affected).
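    To make the “time_t becomes part of the ABI” point a bit more concrete, here’s a minimal C sketch. The struct is a made-up stand-in for a type a library might expose in its public headers; building it both ways assumes a 32-bit (or multilib) toolchain and a glibc new enough to support _TIME_BITS:

        /* Compare the two builds on a 32-bit target:
         *   gcc -m32 time_abi.c -o abi32 && ./abi32
         *   gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 time_abi.c -o abi64 && ./abi64
         */
        #include <stdio.h>
        #include <stddef.h>
        #include <time.h>

        /* Hypothetical type from a library's public API. */
        struct log_entry {
            time_t timestamp;
            int    level;
        };

        int main(void) {
            printf("sizeof(time_t)           = %zu\n", sizeof(time_t));
            printf("sizeof(struct log_entry) = %zu\n", sizeof(struct log_entry));
            printf("offsetof(level)          = %zu\n", offsetof(struct log_entry, level));
            /* A caller built with one time_t size and a library built with the
             * other will disagree on all three numbers, yet the symbol names
             * stay identical and the dynamic linker resolves them just fine -
             * that's the subtle incompatibility. */
            return 0;
        }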


  • Ah, the joys of requiring non-standard library calls for apps to function.

    The problem is that this approach breaks the C standard library API, which is one of the few things that are actually pretty universal and expected to work on any platform. You don’t want to force app developers to support your snowflake OS that doesn’t support C.

    The current way forward accepted by every other distro is to just recompile everything against the new 64-bit libraries. Unless the compiled software makes weird hardcoded assumptions about sizes of structs (hand-coded assembly might be one somewhat legitimate reason for that, but other distros have been migrating to 64-bit time_t for long enough that this should have been caught already), this fixes the problem entirely for software that can be recompiled.

    That leaves just the proprietary software, for which you can either have a separate library path with 32-bit time_t dependencies, or use containers to effectively do the same.

    Sneaky edit: why not add new 64-bit APIs to C? Because the C standard never said anything about how time_t is represented. If the chosen implementation turns out to be insufficient, it’s purely on the platform to fix it. From the C17 standard:

    The range and precision of times representable in clock_t and time_t are implementation-defined.
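    Just to illustrate how much that leaves up to the platform, here’s a small portable C probe - everything it prints is implementation-defined (the POSIX note in the comment is the usual extra guarantee people rely on, not something ISO C gives you):

        #include <stdio.h>
        #include <stdint.h>
        #include <time.h>

        int main(void) {
            /* The width of time_t is implementation-defined. */
            printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
            /* So is its signedness. */
            printf("time_t behaves as %s\n",
                   ((time_t)-1 < (time_t)0) ? "signed" : "unsigned");
            /* ISO C doesn't even say time_t counts seconds since 1970;
             * that guarantee comes from POSIX. */
            printf("time(NULL) = %jd\n", (intmax_t)time(NULL));
            return 0;
        }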



    But Wayland isn’t a thing on its own - there’s no “Wayland server” or anything else equivalent to the X server. Compositors like KWin or GNOME’s Mutter are Wayland implementations fully responsible for handling the display output.

    You can blame Wayland for the lack of universally supported global hotkeys or for issues with apps that need to know exactly where on the screen they are - these are issues with the protocol - but not for bugs in one compositor’s implementation of display management.




  • I can’t speak for these specific laptops, but unlike x86, ARM generally doesn’t have a way for an OS to discover the available hardware, and most ARM platforms historically didn’t do anything to help. There is a standard for UEFI on ARM where the UEFI is supposed to tell the OS about the hardware, but as far as I know this is only a thing on ARM servers and these laptops might not support it.

    Without any way of probing for hardware or getting the information from UEFI, Linux has to somehow be compiled with all the info about the hardware built in. And the build will be model-specific (there’s a way to pass a file describing the hardware - a device tree - to Linux from the bootloader, which enables a single kernel to be used on multiple models and keeps just a small part of the bootloader model-specific, but somebody still needs to write that file and the manufacturers clearly don’t intend to do that).



  • As the other person said, what you’re doing is pretty much emulating the behavior of tiling window managers. Edit while writing: I’m leaving the rest here because you might find it useful, but I’ve just realized that there’s a tiling extension for GNOME (the desktop environment used by Ubuntu): Tiling Shell. That’s definitely going to be the most painless way for you to try out tiling. There’s also bound to be something similar available for KDE.

    I think you will get a much better result than with virtual screens by configuring one of these tiling WMs to your taste, assuming you’re willing to spend a few hours learning all the ins and outs (it’s absolutely OK if you’re not willing to do that).

    Here are links to a few of them; you should be able to install them on whatever distro you prefer:

    Hyprland - a tiling WM focused on a good out-of-the-box experience and animations (but it’s still very configurable). If you want to get your feet wet with standalone tiling WMs as fast and painlessly as possible, this is IMHO the way.

    Sway - a more keyboard-centric tiling WM that leaves out the fancy stuff (for example I don’t think there’s any way to do window shadows or animations for all the window manipulation) and focuses on just being fast and efficient if you learn its concepts. This is the only one I’ve ever used for longer periods of time.

    SwayFX - “Sway, but with eye candy!” - I don’t think I can write a better description - has some graphics effects like window blurring or shadows.


    They probably fixed all the bugs they considered essential, and the rest is just nice-to-have fixes that can be moved to the next cycle if necessary (and they still have a week to work on them before release, although they might be careful not to introduce severe bugs now).

    The general idea with this approach is that it doesn’t make sense to block a release on a few bugs that only a subset of the available developers are working on while the rest sit idle - the project gets finished faster by moving the remaining tasks over to the next release and accepting the bugs in the meantime.