

Yes, that’s it. I added the link in the OP.
Very arrogant answer. Good that you have intuition, but the article is serious, especially given how LLMs are used today. The link to it is in the OP now, but I guess you already know everything…
Thank you. I found the article; link in the OP.
Oh wow thank you! That’s it!
I didn’t even remember how good this article was and how many experiments it collected.
I’m aware of this and agree, but:
- I see that asking an LLM how it arrived at its answer, and treating the reply as “proof” of sound reasoning, has become common
- this new trend of “reasoning” models, where an internal conversation is shown step by step, seems to rest on the same assumption that the train of thought can be trusted. Given the simple experiment I mentioned, that is extremely dangerous and misleading (see the toy sketch after this list)
- take a look at this video: https://youtube.com/watch?v=Xx4Tpsk_fnM : everything there is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust it?
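To make the point concrete, here’s a toy Python sketch (everything in it is invented for illustration, it’s not a real model): the “answer” comes from one mechanism and the “explanation” from a completely unrelated one, yet the explanation reads like a faithful account of the computation. That’s exactly the failure mode the experiment exposes.

```python
# Toy illustration (NOT a real LLM): the "answer" and the "explanation"
# are produced by two unrelated mechanisms, yet the explanation sounds
# like a faithful account of the computation. All names are made up.

ANSWERS = {"12 * 12": "144", "13 * 13": "169"}  # memorized lookup table

def answer(question: str) -> str:
    # The "model" answers by pure recall; no arithmetic happens at all.
    return ANSWERS.get(question, "I'm not sure")

def explain(question: str, given_answer: str) -> str:
    # The "explanation" is generated after the fact from a template;
    # it never touches the mechanism that produced the answer.
    return (f"I broke {question} into steps, multiplied digit by digit, "
            f"and carried the remainders to get {given_answer}.")

q = "12 * 12"
a = answer(q)
print(a)              # 144 -- correct, but obtained by lookup
print(explain(q, a))  # a plausible story that is not what actually happened
```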
So having a well-written article at hand is a good idea, imho.
What are your interests?
Anyway, you could try following this community: https://lemmy.world/c/peertube
Even if LLM “neurons” and their interconnections are modeled after biological ones, LLMs aren’t modeled on the human brain, where a lot is still not understood.
First, how the neurons are organized is completely different: think of the cortex versus the transformer (a minimal sketch of the transformer side is below).
Second, the learning process is nowhere close: LLMs learn by gradient descent over a fixed text corpus, nothing like synaptic plasticity in a living brain.
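For concreteness, here’s a minimal NumPy sketch of scaled dot-product attention, the core operation of a transformer layer (toy dimensions and random weights, just to show the wiring): every token interacts with every other token through learned projections, which is nothing like the layered, sparsely wired organization of cortex.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # each token projects queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # all-pairs token interactions
    return softmax(scores) @ V               # weighted mix over every other token

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```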
The point the article makes about how we do math, through logical steps, while LLMs go by resemblance, is a small but meaningful example. It also shows that you can see how LLMs work; it’s just very difficult.
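Here’s a toy contrast between the two strategies (my own illustration, not the article’s code): rule-based digit-by-digit addition, which generalizes to any input, versus a deliberately naive “answer by resemblance” lookup over memorized examples, which just returns the answer of the closest-looking problem it has seen.

```python
# Rule-based arithmetic (explicit logical steps) vs. answering by
# resemblance to memorized examples. The "resemblance" model is a
# deliberately naive nearest-neighbor lookup, only meant as an analogy.

def add_by_rules(a: str, b: str) -> str:
    # Digit-by-digit addition with carries: works for inputs of any length.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(x) + int(y) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

MEMORIZED = {("23", "45"): "68", ("23", "46"): "69"}  # tiny "training set"

def add_by_resemblance(a: str, b: str) -> str:
    # Return the answer of the most similar-looking memorized problem.
    key = min(MEMORIZED,
              key=lambda k: sum(c1 != c2 for c1, c2 in zip(k[0] + k[1], a + b)))
    return MEMORIZED[key]

print(add_by_rules("999", "1"))        # 1000 -- the steps generalize
print(add_by_resemblance("24", "45"))  # 68 -- looks close, but 24+45 is 69
```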