The A16, I think, is a four-GPU card intended for remote work that would be a natural fit for this. Except that it has no outputs.
You can do what you’re asking about in X, but I don’t think in Wayland.


Making something not the default, then removing it because it isn’t widely used (because it’s now disabled by default and users have to know it exists and then turn it on), is the GNOME way.
Make no mistake, they’re trying to remove features they don’t like. There are lots of people involved in free software because they didn’t get to be in control of nonfree software.


Grimly: “year of the linux desktop”


You need an SD card adapter that lets you read and write the SD card from your PC, so you can put an image the Pi can boot onto the card.
You will need this anyway when you eventually run into the SD card having a bunch of bad blocks or unreadable sectors.
It will work “fine” for what you’re describing, but consider getting one of those SATA/M.2 adapter boards so your root filesystem isn’t on media explicitly designed for temporarily holding information until the user can get back to a computer.
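The imaging step itself is a single dd invocation once the adapter shows up. Here’s a sketch run against plain files so nothing real gets overwritten; the device name is the only thing you’d change:

```shell
# Demo of the imaging step against ordinary files, so it's safe to run as-is.
# For the real card, of= would be the device node you identified with lsblk
# (e.g. /dev/sdX); triple-check it, because dd will happily overwrite anything.
dd if=/dev/zero of=demo.img bs=1M count=8 status=none   # stand-in for the Pi image
dd if=demo.img of=target.img bs=4M conv=fsync status=none
sync   # flush everything before you'd pull the card
```

Raspberry Pi Imager or balenaEtcher do the same job with a GUI and some guardrails, if you’d rather not type device names.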
If you already have a computer, just set up a vm.


Since you don’t know what’s happening, you don’t need to be fucking around with BusyBox. Boot back into your USB install environment (was it the live system or netinst?) and see how fstab looks. Pasting it would be silly, but I bet you can take a picture with your phone and post it itt.
What you’re looking for is drives mounted by dynamic device identifiers (like /dev/sda1) as opposed to UUIDs.
Like the other user said, you never know how quickly a drive will report itself to the UEFI, and drives with big caches like SSDs can have hundreds of operations in their queue before they get around to saying hi to the nice motherboard.
If it turns out that your fstab is all fucked up, use ls -al /dev/disk/by-uuid to see what the UUIDs are, fix your fstab on the system, then reboot.
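To make it concrete, here’s a sketch of what the fix looks like. The UUID below is made up; yours come from the ls output:

```
# Find the UUIDs; each entry is a symlink back to the /dev/sdX name:
#   ls -al /dev/disk/by-uuid
#   ... 2f1a3c9e-example -> ../../sda2     (made-up example, yours will differ)
#
# Then in /etc/fstab, swap the device path for the matching UUID, changing
#   /dev/sda2              /  ext4  errors=remount-ro  0  1
# to
#   UUID=2f1a3c9e-example  /  ext4  errors=remount-ro  0  1
```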


Perhaps it’s the misty air of memory, but I truly hope this new driver is as good as the 20-year-old one we used to use…


Thanks for the informative and detailed answer! I’ve only ever installed and used Arch for fun, so the finer points of how pacman handles manually installed packages never came up.
You said “mostly safe”; what kinds of issues can doing what you just described cause? You said pinning it through pacman would be an unsupported partial upgrade; I imagine that even though it would give the package manager visibility into what you’re trying to do, it would result in kinds of dependency resolution that aren’t supported or tested for?


Yeah, I didn’t want to make the bold and refreshing assertion that Arch isn’t appropriate for situations where gracefully handling an old package is a requirement, but that was my initial read on the situation.


I’m not as familiar with the AUR as I am with apt and now dnf; is there a function to keep it from automatically installing something newer? That’s what I meant when I referred to pinning.


If Arch doesn’t have version pinning, then switch to a distribution that does.
Debian has version pinning; NVIDIA runs a third-party repository, and it has a pinning package you can install to get and stay on the 580 branch.
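For what it’s worth, an apt pin is just a small file under /etc/apt/preferences.d. A sketch of holding the driver on the 580 branch might look like this; the file name and package glob are assumptions, so check what the repository actually calls its packages:

```
# /etc/apt/preferences.d/nvidia-580   (hypothetical file name)
Package: *nvidia*
Pin: version 580*
Pin-Priority: 1001
```

apt-cache policy on the package will show you whether the pin took effect.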


Pardon: by that I meant your replies in this comm. It’s not precise language on my part, but I think the meaning should be clear.
Without knowing what games you want to run or what your budget is it would be hard to give more helpful input than “anything will work, give serious consideration to not virtualizing”.
What were you looking for, models and specs?
E: you are absolutely looking for models and specs. I assumed you were just feeling around to figure stuff out because of your other posts in this comm. My apologies.
The short answer is that it doesn’t matter for the requirements you’ve given. Just to make sure I wasn’t lying when typing that, I created and ran a Windows 11 VM under KVM on Debian, installed on an old ThinkPad from ten years ago, and it ran fine. The specs were an i5-3320M with 16 GB of RAM. I was able to start and run Affinity and Nuclear Throne. I only made a 30 GB qcow device for that VM, so you probably don’t need a 1 TB disk…
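If anyone wants to repeat that experiment, it’s only a couple of commands once libvirt and virt-install are in place. A sketch, with the VM name, ISO path, and sizing all placeholders:

```
# qcow2 grows on demand, so 30G is a ceiling, not an up-front allocation:
qemu-img create -f qcow2 win11.qcow2 30G
# Boot the installer under KVM; --osinfo tunes defaults for the guest OS
# (older virt-install versions call this --os-variant):
virt-install --name win11 --memory 8192 --vcpus 4 \
  --disk path=win11.qcow2,format=qcow2 \
  --cdrom Win11.iso --osinfo win11
```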
Assuming you want to run more modern games, both recent (from the last 5 or so years) Intel and AMD integrated graphics perform decently at 1440p and 1080p, which is what a lot of laptops have for screens.
Laptops with replaceable ram are rarer than they once were, but can still be had and any laptop with ddr4 will be less expensive than one with ddr5. You don’t seem to have any use case that needs faster ram, so that’s a cost/performance tradeoff you may be willing to make.
I would personally stay away from “gaming” oriented laptops because they’re generally optimized around performance and price with build quality, durability and longevity left by the wayside.
So for specs I’d say a recent CPU with an iGPU (it’s hard to find one in a laptop nowadays that doesn’t have one!), 16 GB of DDR4 if it’s upgradable and 32 GB of DDR4 if it’s not, and maybe 512 GB of storage if it’s soldered and 256 or whatever if it’s not.
Again, if you have specific games you want to run then that changes things.
Most games run well under Wine/Steam. Most of the ones that don’t are using programming techniques intended to detect a VM, hypervisor, or host OS, like anti-cheat.
So you can probably take gaming off your vm uses list. If you can’t because you wanna run games that use anti cheat as above, skip to the bottom of this reply.
I do not use Affinity, but my experience with applications that have an “output”, like design, modeling, or productivity tools, is that it’s often not worth it to run them under some compatibility layer or virtualization system. Every time you start that program up, you need it to run so you can blast out an idea, show someone how the project is going, or open something someone sent you. It’s infinitely more frustrating to have to figure out what changed since last night to make it not work, or to make the magic marker brush (and only the magic marker brush!) cause an immediate crash. This might also be a “jump to the end” scenario. Try it first and see, though!
Windows 11 has relaxed requirements for its IoT versions. They both load less into cache and require less memory, in addition to supporting CPUs as far back as third- and fourth-generation Intel Core chips from 14 years ago. So use that version of Windows for your VM and you can easily scrape by with 16 GB of RAM if you need to.
Most people like AMD GPUs better on Linux; I tend to like NVIDIA better at the moment. I have a lot of experience with Linux and a high tolerance for troubleshooting, though, so your mileage may vary.
This is some counterintuitive input and I will not be answering questions about it, take it or leave it: if you plan to keep your computer for a while, buy something with a cpu manufactured on the largest “process” you can reasonably accept. As chips’ features get smaller and smaller it takes less time and energy for electromigration to fundamentally change their behavior.
If you find yourself needing to run games or other software packages that care deeply about knowing they’re on bare-metal Windows, just dual boot. It will only take a little time to boot back and forth, and the only prerequisites are learning your distro’s GRUB repair process in case Windows overwrites your bootloader, and keeping backups so you don’t panic, which you should be doing anyway.
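The repair process is mostly a chroot from the live USB. A rough sketch for a typical UEFI setup, with device names as placeholders and the exact spelling varying by distro:

```
# From the live environment: mount the installed system and chroot into it.
mount /dev/sdXn /mnt              # your root partition
mount /dev/sdXm /mnt/boot/efi     # your EFI system partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt
# Reinstall GRUB and regenerate its config (Debian/Ubuntu spelling;
# elsewhere it's grub-mkconfig -o /boot/grub/grub.cfg):
grub-install /dev/sdX
update-grub
```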


You can use any distribution, but you will most likely have to load the Broadcom wireless modules manually.
If you’re able to use a wired network connection then this is no problem and might not even be something you’re worried about.
When you do decide to get wireless running, make sure to do it in a way that’s copacetic with your chosen distribution’s package management, so everything “just works” on a system update. If you don’t take the time to integrate third-party modules into package management, then system updates can unpredictably break the functionality those modules provide. You may not remember what you did, how to reproduce its effects, or even that you did it in the first place, leading to some pretty unenjoyable situations.
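On Debian and its derivatives, the copacetic route is the DKMS-packaged driver, which gets rebuilt automatically on every kernel upgrade. A sketch, assuming one of the common BCM43xx chips (check lspci for yours) and that non-free sources are enabled:

```
# Identify the chip first:
lspci -nn | grep -i broadcom
# The wl driver, packaged so DKMS rebuilds it on each kernel update:
sudo apt install broadcom-sta-dkms
# Some chips use the in-kernel b43 driver and only need its firmware instead:
# sudo apt install firmware-b43-installer
```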
Consider keeping macOS on there and dual booting: you will need it for any firmware updates, it’s a good fallback when something breaks, and when you want to sell or give away the machine you’ll use macOS to restore it for the next owner. Many Intel Macs can have their macOS installation loaded onto a USB device and, depending on how you handle the bootloader and EFI situation, still start it up easily.


No reason to be skeptical, teams and groups are very trustworthy so teamgroup is a lock.
Reseat the stick you installed and run MemTest86.
It’s more likely that you have a badly seated stick or a faulty stick than that any consumer memory controller from the last 20 years cares about the installed sticks matching.


In versions before Sequoia you can uncheck “draw window contents while dragging”, which makes macOS draw only an outline until you release the drag. In my experience that setting stops a lot of slowdown and hanging when moving windows between monitors.
I can’t say for sure, because idk which 2017 model you have or what monitor you have, but it may also be related to the monitor not supporting the same DPI or colorspace as the built-in display. In those circumstances, a hang when moving windows between screens comes from the video card furiously swapping resources in and out to show everything.
I don’t know what you mean when you say “switch screens.” Like in Mission Control or switching workspaces?
The last two things you’re talking about can be handled with hotkeys. Option-Command-D toggles the Dock, Command-Q quits, and Command-Option-Esc force quits. Make sure you have the correct program in front before you do this.
If you absolutely cannot live without clicking the red button and knowing the program closed, there’s a bunch of little programs out there that change the behavior to what you want. I don’t recommend this though, because you’ll feel lost when working with a computer that isn’t your specific customized device.
What’s kinda funny about switching between Windows, Macs, and different Linux systems is that the Windows and Linux GUI elements act mostly the same but the hotkeys are all different, while the Mac and Windows hotkeys are mostly the same but the GUI elements act real different.
My apologies for not having definitive links and answers like above, I’m not in front of a million computers at the moment and you can’t trust what you just read online.
What kinds of window management changes would you like to make? Iirc you can make some decently radical adjustments using brew but I’m not extremely knowledgeable about them because I like to keep it normal.


I don’t have an accord, I can’t save at bonfires.


Run journalctl and look at what’s happening.
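A few invocations that narrow things down quickly, assuming a systemd distro (the unit name is just an example):

```
journalctl -b -p err          # only errors from the current boot
journalctl -b -1              # the previous boot, useful after a crash
journalctl -f                 # follow new messages live while reproducing it
journalctl -u NetworkManager  # one unit's messages
```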