• 0 Posts
  • 210 Comments
Joined 1 year ago
Cake day: June 29th, 2023





  • It’s an illusion. People think that because the language model puts words into sequences like we do, there must be something there. But we know for a fact that it is just word associations. It is fundamentally just predicting the most likely next word and generating it.
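    To make the “just predicting the next word” part concrete, here’s a rough Python sketch of that sampling loop. The probability table is a toy stand-in for the neural network, not any real model’s API:

    ```python
    import random

    def next_word_probabilities(context):
        # Toy stand-in for the neural network: given the text so far,
        # return a probability for each candidate next word.
        return {"the": 0.4, "a": 0.3, "strawberries": 0.2, ".": 0.1}

    def generate(prompt, max_words=20):
        words = prompt.split()
        for _ in range(max_words):
            probs = next_word_probabilities(" ".join(words))
            # Pick a word in proportion to its probability, append it, and feed
            # the longer sequence straight back in. That's the whole loop: there
            # is no separate step that re-reads the output and asks "does this
            # match what I meant to say?"
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("Howard likes"))
    ```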

    If it helps, we have something akin to an LLM inside our brain, and it does the same limited task. Our brains have distinct centres that do all sorts of recognition and generative tasks, including images, sounds and language. We’ve made neural networks that do these tasks too, but the difference is that we have a unifying structure we call “consciousness” that is able to grasp context, and is able to loop the different centres back into one another to achieve all sorts of varied results.

    So we get our internal LLM to sequence words, one word after another, then we loop those words back through the language recognition centre into the context engine, so it can check whether the words match the message it intended to create, against its internal model of the world. If there’s a mismatch, it might ask for different words until it sees the message it wanted to see. This can all be done very fast, and we’re barely aware of it. Or, if it’s feeling lazy today, it might just blurt out the first sentence that sprang to mind and it won’t make sense, and we might call that a brain fart.

    Back in the 80s “automatic writing” took off, which was essentially people tapping into this internal LLM and just letting the words flow out without editing. It was nonsense, but it had this uncanny resemblance to human language, and people thought they were contacting ghosts, because obviously there has to be something there, right? But there isn’t; it just sounds like people.

    These LLMs only produce text forwards; they have no ability to create a sentence, then examine that sentence and see if it matches some internal model of the world. They have no capacity for context. That’s why any question about A inside B trips them up, because that is fundamentally a question about context. “How many Ws are in the sentence ‘Howard likes strawberries’?” is a question about context, and that’s why they screw it up.
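    (For comparison, a plain program that can actually look at the characters finds the same question trivial, which is the gap being pointed at:)

    ```python
    sentence = "Howard likes strawberries"
    print(sentence.lower().count("w"))  # 2: one in "Howard", one in "strawberries"
    ```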

    I don’t think you solve that without creating a real intelligence, because a context engine would necessarily be able to expand its own context arbitrarily. I think allowing an LLM to read its own words back and do some sort of check for fidelity might be one way to bootstrap a context engine into existence, because that check would require it to begin to build an internal model of the world. I suspect the processing power and insights required for that are beyond us for now.
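    Purely as a sketch of the shape of that idea, with generate() and check_fidelity() as hypothetical stand-ins rather than any existing model or API:

    ```python
    import random

    def generate(intent):
        # Hypothetical "internal LLM": blurt out a candidate sentence. Here it
        # just shuffles the intended words to fake imperfect output.
        words = intent.split()
        random.shuffle(words)
        return " ".join(words)

    def check_fidelity(sentence, intent):
        # Hypothetical context check: re-read the sentence and decide whether it
        # matches the intended message. A real version would need an internal
        # model of the world; here the "world" is just the intent itself.
        return sentence == intent

    def say(intent, max_attempts=10):
        for _ in range(max_attempts):
            sentence = generate(intent)
            if check_fidelity(sentence, intent):
                return sentence   # the words match the message we meant to send
        return sentence           # gave up editing: the "brain fart" case

    print(say("Howard likes strawberries"))
    ```

    The point of the sketch is just that the check is a separate loop wrapped around the generator, which is exactly the piece current LLMs don’t have.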








  • I cut a big nerve in my thumb years ago, and apparently plastic surgeons fix that sort of thing.

    They reattached the nerve bundles, but I was told that while the sheaths could be realigned, the nerves themselves would have to grow back from the point of the cut all the way to the skin.

    At first one half of my thumb was entirely numb, and over the course of well over a decade I’d get pins & needles as bunches of nerves finished regrowing, except attached to random channels in the nerve bundle, so my brain had to completely remap all those signals to what they actually meant. I also had extreme nerve pain near the cut whenever it was bumped, I assume because many nerves just didn’t grow back successfully and remained near that site.

    It felt super weird because hot, cold, pain & touch were all mixed up, but eventually my brain sorted them out. It still feels a little weird, especially near my nail, but I haven’t had a pins & needles experience for a few years.

    The problem with doing that with a neck is that it would take wayyy longer, and the chances of the patient dying from complications while no brain signals are working right would be huge… yeah, I don’t see medical science fixing this unless we can regrow nerves in a much shorter span of time.







  • I’ve told you how the concepts apply; if you found it confusing, you could have asked. You didn’t.

    But you’ve admitted you’re not actually interested in my answers, you just want to accuse me of pulling things out of my arse:

    I was simply establishing the fact that neither of us had them.

    I don’t know why I’d bother with someone whose only point here is to tear down whatever I’m saying. You don’t even seem to have a position.


  • Excrubulent@slrpnk.net to Technology@lemmy.world · Youtube has fully blocked Invidious

    That’s not how that works. I told you the point I had a problem with and wanted sourced, and you admitted it was pure speculation.

    If you are skeptical about anything specific I’m saying, you can ask for the same thing. You didn’t; you just said I hadn’t sourced anything, which wasn’t true: I gave you links so you could educate yourself, and since you’re still confused about what any of it means, apparently you didn’t do that. When I asked what you wanted specifically sourced, you named everything, which is as pointless as naming nothing.

    This is presumably because you don’t actually care about sources; you were just embarrassed that you had to admit it was pure speculation, and you wanted to project that back at me.

    If you’re actually curious to understand what I’m saying, you can ask a specific question, but you’re not doing that. If you’re just going to keep insisting that I’m pulling things out of my arse, you’re wrong, but I won’t keep replying.