• 2 Posts
  • 64 Comments
Joined 1 year ago
Cake day: August 8th, 2023


  • I assume the “kill it” comment was a little tongue-in-cheek. On small SBCs, like a Pi, or old hardware, it could be a problem. I’ve seen people with flatpaks taking up 30GB of space, which is significant. I’m not sure how much RAM it wastes. I assume running 6 different applications that have loaded 6 different versions of Qt libraries would also use significantly more RAM than just loading the system’s shared Qt libraries once.
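    If you want to put a number on the disk-space side, here's a rough sketch in Python (the paths are just the usual default system and per-user Flatpak install locations, so adjust as needed; Flatpak deduplicates identical files with hard links, so the script counts each (device, inode) pair only once):

    ```python
    import os

    def dedup_size(path):
        """Sum file sizes under `path`, counting hard-linked files only once."""
        seen = set()
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    st = os.stat(os.path.join(root, name), follow_symlinks=False)
                except OSError:
                    continue  # unreadable file; skip it
                key = (st.st_dev, st.st_ino)
                if key not in seen:
                    seen.add(key)
                    total += st.st_size
        return total

    # Default Flatpak locations (system-wide and per-user); these are assumptions
    # and may differ on some distros.
    for base in ("/var/lib/flatpak", os.path.expanduser("~/.local/share/flatpak")):
        if os.path.isdir(base):
            print(f"{base}: {dedup_size(base) / 1e9:.1f} GB")
    ```

    (`flatpak list` is the quicker way to see what's installed; the script is just to measure the actual on-disk footprint.)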







  • AI are people, my friend. /s

    But, really, I think people should be able to run algorithms on whatever data they want. It’s whether the output is sufficiently different or “transformative” that matters (along with other laws, like those covering use of a person’s likeness). Otherwise, I think the laws will get complex and nonsensical once you start adding special cases for “AI.” And I’d bet that if new laws are written, they’ll be written by lobbyists to further erode the threat of competition (from free software, for instance).






  • The problem is that HP writes drivers and software for that hardware on Windows but not on Linux, so Linux depends on volunteers writing that software for free (which often involves complex reverse engineering). With Linux, you need to make sure you use widely used hardware that someone has already written support for (this mostly applies to laptops and peripherals, which often use custom, non-standard hardware). There may be a way to fix your problems, but you’ll have to search forums or issue trackers for the solutions, and they’re probably pretty involved to get working correctly. The router crashing is probably just a coincidence, though, or the laptop is using a feature that’s broken on your router.








  • I thought the tuning procedures, such as RLHF, kind of mess up the probabilities, so you can’t really tell how confident the model is in its output (and I’m not sure how well-calibrated those probabilities were in the first place)? There’s a toy sketch of what I mean by token probabilities at the end of this comment.

    Also, it seems that past a certain point, the more context the models are given, the less accurate the output. A few times, I asked ChatGPT something and it used its browsing functionality to look it up, and it was still wrong even though the sources were correct. But when I disabled “browsing” so it would just use its internal model, it was correct.

    It doesn’t seem like there are too many expert services tied to ChatGPT (I’m just using this as an example because that’s the one I use). There’s obviously some kind of guardrail system for “safety,” there’s a search/browsing system (it shows you when it uses this), and there’s a Python interpreter. Of course, OpenAI is now very closed, so they may be hiding that it’s using expert services (beyond the “experts” in the MoE model they’re speculated to be using).
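    As a toy illustration of what “how confident the model is” means at the token level: the model produces logits over its vocabulary, and the softmax of those logits gives a probability for each possible next token. The numbers and tiny vocabulary below are made up, not from any real model:

    ```python
    import math

    def softmax(logits):
        """Convert raw logits into probabilities that sum to 1."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        return [e / s for e in exps]

    # Hypothetical logits for the next token over a made-up 3-word vocabulary.
    vocab = ["Paris", "London", "Rome"]
    logits = [5.1, 2.3, 1.8]

    for token, p in zip(vocab, softmax(logits)):
        print(f"{token}: {p:.2f}")
    ```

    The top token’s probability is what people usually read as “confidence,” but after RLHF-style tuning those probabilities tend to be poorly calibrated, so a high value doesn’t necessarily mean the answer is right.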