• 0 Posts
  • 268 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • The main thing (by far) degrading a battery is charge cycles. After 7 years with, say, 1,500 cycles, most batteries will have degraded far beyond “80%” (which is always just an estimate from the electronics anyway). Yes, you can slow it down a bit by limiting the charging rate, heat, and the min/max %, but it’s not going to be a night-and-day difference. After 7 years of daily use, you’re going to want to swap the battery, if not for the capacity loss then for safety reasons.


  • I think I have a simple function in my .zshrc file that updates flatpaks and runs dnf or zypper depending on what the system uses. This file is synced between machines as part of my dotfiles sync so I don’t have to install anything separate. The interface of most package managers is stable, so I didn’t have to touch the function.

    This way I don’t have to deal with a package that’s on a different version in different software repositories (depending on distribution) or manually install and update it.

    But that’s just me, I tend to keep it as simple as possible for maximum portability. I also avoid having too many abstraction layers.
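    For illustration, a minimal sketch of what such a `.zshrc` function could look like. The function name `up` and the exact flags are my own guesses, not the commenter’s actual dotfiles:

    ```shell
    # Hypothetical "up" helper for a .zshrc -- a sketch, not the actual function.
    up() {
      # Run the native package manager, whichever one this system has.
      if command -v zypper >/dev/null 2>&1; then
        sudo zypper refresh && sudo zypper dup   # openSUSE Tumbleweed
      elif command -v dnf >/dev/null 2>&1; then
        sudo dnf upgrade --refresh               # Fedora
      fi
      # Update Flatpaks on top, independent of the base distro.
      if command -v flatpak >/dev/null 2>&1; then
        flatpak update -y
      fi
    }
    ```

    Because it only shells out to whatever package manager exists, the same file can be synced unchanged across machines running different distros.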



  • Technically, wired charging degrades the battery less than wireless charging, mainly because of the excess heat the latter generates. In the same way, slower wired charging generates less heat than faster wired charging. Lower and upper charging limits also help (the tighter, the better).

    But I personally don’t bother with it. In my experience, battery degradation and longevity mostly come down to the “battery lottery”, comparable to the “silicon lottery” where some CPUs overclock/undervolt better than others. I’ve had phone batteries that were mostly charged with a slow wired charger degrade earlier and further than others that were charged almost exclusively wirelessly. No battery is an exact copy of another. Heck, I once had a 2-month-old battery die on me after just ~20 cycles. It happens.

    Sure, on average you might get a bit more life out of your batteries, but in my opinion it’s not worth it.

    The way I see it with charging limits: sure, your battery might degrade 5% more over the span of 2 years when always charging it to 100% (all numbers here are just wild estimates and, again, depend on your individual battery). But when you limit charging to 80%, for example, you give up 20% of the capacity from the get-go. Unless, of course, you know exactly which days you’ll need a 100% charge and plan your charging ahead of time accordingly.

    Something I personally could never be bothered with. I want to use my device without having to think about it. If that means having to swap out the battery one year earlier, then so be it.



  • I always hear power efficiency cited as something ARM chips are magically better at, but the Ryzen AI 300 and Intel Core Ultra 200V series seem to be very competitive with Qualcomm’s offering. It’s hard to compare 1:1, as the same chip in different laptops can be configured very differently in terms of TDP and power curves, and the efficiency “sweet spots” aren’t the same across these chips. Core Ultra 200V is also awaiting more thorough testing, but it seems to be right up there with the Snapdragon.

    I honestly found the Snapdragon X very underwhelming after all that marketing about how much better it was than Apple’s M3 and Intel’s and AMD’s offerings. By the time the Snapdragon was actually available in end-user products, AMD’s and Intel’s competing generations were right around the corner, and we’d also seen a vastly improved M4 chip (although only in an iPad so far, so meh). Add to that the issues you’ll encounter because Windows’ x86-to-ARM translation layer, while certainly improved, is nowhere near as seamless as what Apple did.



  • The best case is that the model used to generate this content was originally trained on data from Wikipedia, so it “just” generates a worse, hallucinated “variant” of the original information. Goes to show how stupid this idea is.

    Imagine this in a loop: an AI trained on Wikipedia then alters content on Wikipedia, which in turn gets picked up as training data for the next model. It would just get worse and worse, similar to how re-encoding the same video over and over yields continuously worse results.



  • It’s kind of in the word distribution, no? Distros package and … distribute software.

    Larger distros usually do quite a bit of kernel work as well, and they often carry bugfixes or other changes in their kernel that aren’t in mainline or stable. Enterprise-grade distributions often backport hardware support from newer kernels into their older ones. But even distros with close-to-latest kernels like Tumbleweed or Fedora do this to a certain extent. And this isn’t limited to the kernel; it often extends to many other packages.

    They also do a lot of (automated) testing, just look at openQA for example. That’s a big part of the reason why Tumbleweed (relatively) rarely breaks. If all they did was collect an up-to-date version of every package they want to ship, it’d probably be permanently broken.

    Also, saying they “just” update the desktop environment doesn’t do it justice. DEs like KDE and GNOME are a lot more than just something that draws application windows on your screen. They come with userspace applications and frameworks. They introduce features like vastly improved HDR support (KDE 6.2, usually along with updates to Wayland etc.).

    Some of the rolling (Tumbleweed) or more regular (Fedora) releases also push for bigger technical changes. Fedora dropped X11 by default on their KDE spin with v40, and will likely drop X11 from their default GNOME variant as well, now that GNOME no longer requires it even when running Wayland. Tumbleweed is actively pushing for great systemd-boot support, and while it’s still experimental, it’s already in a decent state (not ready for prime time yet though).

    Then, distros also integrate packages to work together. A good example is Tumbleweed’s built-in, enabled-by-default snapshot system (you might’ve figured out by now that I’m a Tumbleweed user): it uses snapper to create btrfs snapshots on every zypper (package manager) system update, and not only can you roll back a running system, you can also boot older snapshots directly from the grub2 or systemd-boot bootloader. You can replicate this on pretty much any distro (btrfs support is in the kernel, snapper is made by an openSUSE member but available for other distros, etc.), but here it’s all integrated and ready to go out of the box. You don’t have to configure your package manager to create snapshots with snapper, the btrfs subvolume layout is already set up for you in a way that makes sense, and you don’t have to think about how to add these snapshots to your bootloader.

    So distros or their authors do a lot and their releases can be exciting in a way, but maybe not all of that excitement is directly user-facing.
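    To make the snapshot integration concrete, here’s roughly what that workflow looks like on Tumbleweed (the snapshot number is made up; `snapper list` tells you the real ones, and all of these need root):

    ```shell
    # System update: snapper automatically wraps it in pre/post snapshots.
    sudo zypper dup

    # List snapshots, including the pre/post pair just created around the update.
    sudo snapper list

    # Roll the default btrfs subvolume back to a known-good snapshot
    # (42 is a placeholder -- use a number from `snapper list`).
    sudo snapper rollback 42

    # Alternatively, pick an older snapshot straight from the
    # grub2/systemd-boot menu at boot time.
    ```

    None of this needs per-machine setup on Tumbleweed; the zypper plugin, subvolume layout, and bootloader entries ship preconfigured, which is exactly the integration work the comment is describing.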


  • Even at early bird pricing (39,-€) I’d rather get a cable that has the specs I need.

    This seems to do a little more than simply list the specs (it shows shorted pins and whatnot), but it doesn’t do any kind of load testing (e.g. whether sending 240 W over the wire somehow interferes with the data transfer).

    Most of the cables I have lying around are USB 2.0 with 100 W PD, as that’s what most devices that include a cable in the box come with. For other cables, I know what they’re capable of because I read the spec sheet before purchasing them.

    This might be useful to shops selling refurbished phones that want to quickly check whether used USB-C cables are still good, but I don’t see why anyone would want this for personal use.