• Fedditor385@lemmy.world · +43 · 6 days ago

    Oh, funny, I also have sentient AI at home that I developed, but choose not to release it. My mom also created one accidentally while baking a cake, but it was too powerful, so she decided it was best to destroy it like it never existed. You know, for everyone’s safety.

    • andallthat@lemmy.world · +6 · 6 days ago

      next time you or your mom have a cake you wish disappeared without a trace, call me. I’m an… AI researcher

    • PhoenixDog@lemmy.world · +21 · 6 days ago

      “Our AI has cost more money than it would take to solve world hunger, tanked the microchip economy, and ruined the lives of thousands of people we’ve had to let go… And it’s stupid as all fucking hell. What do we do?”

      “Say it broke containment and it’s too powerful to release. Foolproof!”

    • quips@slrpnk.net · +6/-8 · 6 days ago

      Have you read what they have to say? They make a fairly convincing argument.

  • I Cast Fist@programming.dev · +64 · 7 days ago

    Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”

  • GuyIncognito@lemmy.ca · +61 · 7 days ago

    crazy that AI companies’ big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”

      • Gladaed@feddit.org · +7/-10 · 7 days ago

      How would it do that?

      It’s a set of inputs that generates an output, once per execution. Integrating it into an infrastructure that lets it start external programs and schedule tasks really isn’t on the LLM itself.

      You cannot start a timer without having a timer, too. And LLMs aren’t beings who exist continually like you and me, so time exists on a different, foreign dimension to an LLM.

        • YesButActuallyMaybe@lemmy.ca · +10/-2 · 7 days ago

        You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
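For what it’s worth, the harness-side version of this idea is easy to sketch in Python. The message shape and helper names below are made up for illustration — the point is that the clock lives in the code around the model, which only ever sees the timestamp as more input text:

```python
import time

def build_initial_message(text):
    # The harness stamps the first message with the current epoch time.
    return {"sent_at": time.time(), "text": text}

def elapsed_note(msg):
    # On every later turn, the harness (not the model) computes the
    # elapsed wall-clock time and injects it into the prompt as text.
    elapsed = time.time() - msg["sent_at"]
    return f"[{elapsed:.1f}s since conversation start] "

msg = build_initial_message("remind me in 10 minutes")
time.sleep(0.2)
prompt = elapsed_note(msg) + "any reminders due?"
```

Either way, the model itself stays stateless between calls; the harness does the timekeeping.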

          • stringere@sh.itjust.works · +7 · edited · 6 days ago

          How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.

            • YesButActuallyMaybe@lemmy.ca · +2/-8 · 7 days ago

            It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it

              • Gladaed@feddit.org · +12 · 7 days ago

              That’s not how that works.

              LLMs execute on request. They tend not to be scheduled to evaluate once in a while since that would be crazy wasteful.
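A toy sketch of what “execute on request” means in practice — `fake_llm` here is a stand-in for a real model call, and all the timer logic lives in the caller, not the model:

```python
import time

def fake_llm(prompt):
    # Stand-in for a model invocation: a pure function of its input
    # that runs once per call and keeps no state between calls.
    return f"response to: {prompt}"

def run_timer(seconds, on_fire):
    # The scheduling happens here, in ordinary harness code.
    # The model is invoked exactly once, when the deadline passes.
    deadline = time.time() + seconds
    while time.time() < deadline:
        time.sleep(0.01)  # the caller, not the model, is the timer
    return on_fire("the timer fired")

result = run_timer(0.05, fake_llm)
```

Polling like this on a real model would mean paying for an inference every tick, which is why nobody schedules LLMs to “evaluate once in a while.”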

                • stringere@sh.itjust.works · +2 · 7 days ago

                Edit to add: I know I’m not replying to the bad mansplainer.

                LLM != TSR

                Do people even use TSR as a phrase anymore? I don’t really see it in use much, probably because it’s more the norm than the exception in modern computing.

                TSR = old techie speak: Terminate and Stay Resident. Back when RAM was more limited (hey, and maybe again soon with these prices!), programs were often run once and done: they ran and were flushed from RAM. Anything that needed to continue running in the background was a TSR.

                  • Gladaed@feddit.org · +2 · 6 days ago

                  Please tell me why you believe that the LLM keeps being executed on your chat even when the response is complete.

  • LiveLM@lemmy.zip · +62 · 7 days ago

    AI companies do this same tired schtick every time they release a model. If only they realized how amateurish it makes them look.

  • GnuLinuxDude@lemmy.ml · +51/-1 · 7 days ago

    Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.

  • Mohamed@lemmy.ca · +35 · edited · 6 days ago

    No, it’s not too powerful. It’s too chaotic. You can’t control it.

    EDIT: It seems I misunderstood. I thought containment here referred to the harness, but they meant VM-style containment. I am still quite skeptical, but it looks like this model is quite good at finding and exploiting security flaws in software.

    • AoxoMoxoA@lemmy.world · +5 · 6 days ago

      It may have blurted out something like “hey, I know exactly how to end this economic suffering and all diseases globally! It’s easy, you just need to…”

      Quick, hit the Red Button!!! Shut it OFF!!!