Oh, funny, I also have sentient AI at home that I developed, but choose not to release it. My mom also created one accidentally while baking a cake, but it was too powerful and she decided it was best to destroy it like it never existed. You know, for everyone's safety.
next time you or your mom have a cake you wish disappeared without a trace call me. I’m a… AI researcher
Bullshit
“Our AI has cost more money than it would take to solve world hunger, tanked the microchip economy, and ruined the lives of thousands of people we’ve had to let go… And it’s stupid as all fucking hell. What do we do?”
“Say it broke containment and it’s too powerful to release. Foolproof!”
This is nonsense and just marketing.
Have you read what they have to say? They make a fairly convincing argument.
Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”
Like my dick

crazy that the AI companies big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”
Roko’s basilisk wasn’t meant to be a brag!
Impressive marketing spin on “our product and deployment strategies are wildly insecure.”
But can it start a timer
How would it do that?
It’s a set of inputs that generates an output, once per execution. Integrating it into an infrastructure that allows it to start external programs and do scheduling really isn’t on the LLM.
You cannot start a timer without having a timer, either. And LLMs aren’t beings who exist continuously like you and me, so time is a different, foreign dimension to an LLM.
It’s a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
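For what it’s worth, the timestamp trick is trivial to sketch. Everything below is a hypothetical harness (made-up function names, not any vendor’s API); the point is that the clock lives entirely outside the model, which only ever sees the time as text injected into its prompt:

```python
import time

def start_timer_context() -> dict:
    """Record the wall-clock start of a conversation.

    Nothing here runs inside the model; the harness (our code)
    owns the clock and just injects it as text later.
    """
    return {"t0": time.time()}

def build_prompt(ctx: dict, user_message: str) -> str:
    # Hypothetical prompt format: prepend the epoch start time and the
    # elapsed seconds, so the model can "reason" about time it has no
    # way of observing on its own.
    elapsed = time.time() - ctx["t0"]
    return (
        f"[conversation started at epoch {ctx['t0']:.0f}; "
        f"{elapsed:.1f}s elapsed]\n{user_message}"
    )

ctx = start_timer_context()
print(build_prompt(ctx, "Is the 30-second timer done yet?"))
```

Note that the elapsed time only updates when something re-runs `build_prompt` — i.e., when someone prompts the model again — which is exactly the “you are the timer” objection.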
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
That’s not how that works.
LLMs execute on request. They tend not to be scheduled to evaluate once in a while since that would be crazy wasteful.
Edit to add: I know I’m not replying to the bad mansplainer.
LLM != TSR
Do people even use TSR as a phrase anymore? I don’t really see it in use much, probably because it’s more the norm than exception in modern computing.
TSR = old techy speak, Terminate and Stay Resident. Back when RAM was more limited (hey, and maybe again soon with these prices!), programs were often run once and done: they ran, then were flushed from RAM. Anything that needed to continue running in the background was a TSR.
Please tell me why you believe that the LLM keeps being executed on your chat even when the response is complete.
AI companies do this same tired schtick every time they release a model. If only they realized how amateurish it makes them look.
Grifters gonna grift.
Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.
Scam Altman
😆
No, it’s not too powerful. It’s too chaotic. You can’t control it.
EDIT: It seems I have misunderstood. I thought containment here referred to the harness, but they meant VM type of containment. I am still quite skeptical, but it looks like this model is quite good at finding and utilizing security flaws in software.
It may have blurted out something like “hey, I know exactly how to end this economic suffering and all diseases globally! It’s easy, you just need to…”
Quick Hit the Red Button!!! Shut it OFF!!!
It was actually very well aligned
What do you mean?

sure_jen.gif
It leaks private data, like its own source
https://www.theguardian.com/technology/2026/apr/01/anthropic-claudes-code-leaks-ai
Not because it’s so smart, but because it’s so fucking stupid, and morons from Anthropic just click buttons without checking.
Lol. Ok, ai bromer.