I’m planning to switch to RISC-V by 2030, and since this is new to me (I’m an old AMD64 and i386 veteran), I wanted to ask what your thoughts and predictions are regarding performance, stability, and usability for a creator of all kinds of content, whether it’s music, movies, 3D, or watching cat videos on YouTube. I’m also planning to buy a new, fresh computer, maybe a laptop, around 2027/2028. Is that a good idea, or am I biting off more than I can chew? To sum up, I’m asking for your opinions, advice, warnings, and thoughts. Feel free to write not only answers to my questions but also anything you consider important in the context of the RISC-V and Linux marriage in the near future.


‘RISC-V is sloooow – Marcin Juszkiewicz’
Encountered this here on Lemmy a few days ago, haven’t looked into it properly. If you search for the article’s title, you should find the post and comments.
To my knowledge, modern CPUs have a lot of hardware acceleration for various common algorithms, especially around media. This is orthogonal to the architecture itself, and I’m not sure RISC-V platforms have implemented all that stuff yet, seeing as it’s been developed for x86/x64 over decades.
Pardon my ignorance but doesn’t having specialized acceleration functionality go against the whole “Reduced Instruction Set” thing?
RISC-V is designed to be an extensible instruction set, where the base is very minimal and reduced but a plethora of extensions exist. The ISA can be small for academic and microcontroller uses, large (more than a hundred extensions) for server uses, or anything in between.
Despite the name, a powerful RISC-V server chip can arguably not be considered “RISC”. That term doesn’t have a single agreed-upon meaning, though some design characteristics strongly associated with RISC still apply, such as limiting memory access to dedicated load/store instructions rather than allowing computation instructions to operate on memory.
Also, not everything is CPU instructions. Acceleration for media codecs, for example, normally means off-loading those tasks to the GPU rather than the CPU. Even if the CPU and GPU are both part of the same SoC, that doesn’t touch the CPU instruction set.
In theory, probably yes. In practice, from what I’ve heard, ARM has some CISC-style instructions, presumably exactly because they offer performance gains.
Even without going too CISC, it can make sense to have instructions for popular use cases.
e.g. ARM has a dedicated instruction, FJCVTZS, for JavaScript-style floating-point-to-integer conversion: https://developer.arm.com/documentation/dui0801/h/A64-Floating-point-Instructions/FJCVTZS
And I thought I’d read something about custom instructions in Apple Silicon to optimise emulation (i.e. translation of x86_64 executables), but I can’t find a source for that; maybe that secret sauce isn’t in the instruction set.
Yeah, what I’ve read is that ARM is in fact a mix of RISC and CISC. Meanwhile, x64 processors decode some CISC instructions into a bunch of simpler micro-ops as one of the first execution stages. So in the end the situation is basically this: