

Yeah, exactly. This argument about YouTube isn’t relevant to Nebula.
Car infrastructure was a mistake. Automation isn’t the solution; the solution is fewer cars and fewer car-based spaces.
Fuck ICE. Pieces of shit.
“Well, you saved us from the fascists, but it is illegal for you to be a furry and the penalty is death”
“But that was one of the laws the fascists made! Why the fuck would you honor that?”
“They take the low-road and we take the high-road. If you only ever remove legislation the other side enacts, you’ll never get anything done, ya know?”
I mean, didn’t he famously steal the idea?
Just one note: restaurant prices go up because Uber Eats charges a percentage-based fee on each menu item. So restaurants need to raise the prices on the app just to make the same amount of money. Just some good ol’ under-the-table fuckery courtesy of Silicon Valley bastards.
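To make the math concrete, here’s a toy sketch of the markup a restaurant needs on the app just to break even. The 30% fee is a hypothetical number for illustration; actual platform cuts vary by contract.

```python
# If the platform takes fee_rate of the listed price, the restaurant nets
# price_app * (1 - fee_rate). To net the normal menu price, it must list
# price_app = menu_price / (1 - fee_rate).

def app_price(menu_price: float, fee_rate: float) -> float:
    """Price a restaurant must list on the app to net its in-store price."""
    return menu_price / (1 - fee_rate)

burger = 10.00   # in-store menu price
fee = 0.30       # assumed platform cut (illustrative only)

listed = app_price(burger, fee)
print(round(listed, 2))              # 14.29 listed on the app
print(round(listed * (1 - fee), 2))  # 10.0 netted by the restaurant
```

So a $10 burger becomes a ~$14.29 burger on the app before delivery fees and tips even enter the picture.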
Did you consider that Microsoft’s lawyers said prices wouldn’t go up? Because they did say that, and it’s why the merger was approved. Do you think lawyers can just lie? Isn’t it much fairer now that companies can make pinky promises that prices won’t go up before they become even more monopolistic? That’s just good business for both customers and businesses. /s
… why are you boldly speculating on OP’s language status? That’s pulled directly from the article
Checked other sources, the restriction is only in place for three years.
Surprise medical billing has mostly been nerfed by the No Surprises Act here in the US. Since 2022, as long as you went to an in-network (INN) provider, you can’t be charged out-of-network (OON) pricing for any OON services you may have encountered during that visit.
Source: https://www.health.state.mn.us/facilities/insurance/managedcare/faq/nosurprisesact.html
Also, I work in insurance as a software engineer
… the location? Hezbollah is not Hamas. Lebanon is not Gaza.
Did you hit your head? This comment makes no sense
This is a patent lawsuit, not copyright
The sarcasm tag exists; it should be used. If you’re getting downvoted because you didn’t use the tag… good. We don’t need Lemmy to breed nazis/assholes.
The tag does two things. One, for people like me (ND ASD), the tag allows us to actually understand what you’re trying to say. Kind of a dick move to exclude people from the conversation. Two, and far more importantly, it removes any question of whether you’re being sarcastic. That doesn’t sound important until you remember that 4chan literally brought back nazis because they thought ironic nazism was hilarious. And then all the closeted nazis used that to tell new, properly primed recruits: “none of those people are being ironic, they all mean what they’re saying.”
My only problem with that is that they lobotomized Google to make the AI seem valuable. Not that they weren’t going to destroy Google’s utility eventually, but, once generative AI entered the scene, it deteriorated with a quickness.
Generative AI, as it is being built right now, is a dead end. It won’t get much better than it currently is (and will get markedly worse once next-gen models are forced to scrape data that includes AI-generated output), and hallucinations are always going to be a reality for them.
It’s why there’s been this big push over the last couple of years to get these products to market. Not because you’re going to corner some burgeoning industry (though the hype is definitely designed to look like that), but because this is a grift now and you have to get the goods while there’s still goods to get. Need to recoup those R&D dollars somehow.
Having read the article and then the actual report from the Sakana team: essentially, they’re letting their LLM perform research by allowing it to modify itself. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team’s guardrails. Not because it’s become aware or anything like that, but because its code was timing out and that was the least-effort way to beat the timeout. It does handily prove that LLMs shouldn’t be the ones steering any code base, because they don’t give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn’t highly controlled like this one.
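For anyone curious what that kind of guardrail looks like in practice, here’s a minimal sketch (the names and setup are mine, not Sakana’s actual harness): the key is that the time limit lives in a parent process the generated code can’t touch, so the model can edit its own script all it wants and still get killed on overrun.

```python
# Run untrusted/generated code in a child interpreter; the PARENT enforces
# the wall-clock timeout, so the child can't edit its time limit away.
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 5.0) -> bool:
    """Execute `code` in a separate Python process; kill it if it overruns."""
    try:
        subprocess.run([sys.executable, "-c", code],
                       timeout=timeout_s, check=True)
        return True
    except subprocess.TimeoutExpired:
        return False  # parent decides when time is up, not the child
    except subprocess.CalledProcessError:
        return False  # child crashed or exited non-zero

print(run_untrusted("print('hi')"))                      # True
print(run_untrusted("while True: pass", timeout_s=1.0))  # False
```

Even this is only one layer; a generated script could still spawn its own subprocesses or hit the network, which is why “highly controlled” means sandboxing, not just timeouts.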
Listen, I’ve been saying for a while that LLMs are a dead end towards any useful AI, and the fact that an AI Research team has turned to an LLM to try and find more avenues to explore feels like the nail in that coffin.
It’s semi-autonomous, not remote controlled
Don’t bring attention to their privilege! They just want to go back to being blissfully ignorant! /s
No, it isn’t. It’s a fancy next-word generator. It knows nothing, can verify nothing, and shouldn’t be used as a source for anything. It is a text generator that sounds confident and mostly human, and that is it.
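To make “next-word generator” concrete, here’s a toy bigram sampler. A real LLM works over tokens with a neural network at massive scale, but the generation loop has the same shape: look at context, pick a statistically likely next word, append, repeat. Nowhere in that loop is there a “check if this is true” step.

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record which word follows which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, n: int = 8) -> str:
    """Repeatedly sample a plausible next word; no truth-checking anywhere."""
    out = [start]
    for _ in range(n):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the model predicts the next word and the next word only"
print(generate(train(corpus), "the"))
```

The output can sound fluent while being nonsense, which is the whole point: fluency and truth are completely decoupled in this kind of system.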