It does sound ludicrous…might be better off making a concave mirror so it reflects and intensifies the light…like a giant magnifying glass over an ant hill. Yeah…that’s how they should do it. Nothing’s gonna go wrong with that.
That means someone at Meta thinks the evidence that would come out at trial would cost them more than $1.4B.
They fired 12 employees out of a workforce of over 216,000. Looks like they fired 1,000 times as many (literally…12,000) last year just because “that’s business.” What a nothingburger.
AI isn’t giving the right misinformation
Same here. It’s good for writing your basic unit tests, and the explain feature is useful for getting your head wrapped around complex syntax, especially given how bad searching for useful documentation has gotten on Google and DDG.
What would be better is polluting the software with invalid but still plausible constraints, so the chips would seem OK and might work for days or weeks but would fail in the field… especially if these chips are used in weapon systems or critical infrastructure.
It’s a pretty big presumption that Elon Musk is providing transparent and accurate information to consumers about a technology he’s hoping to sell. While I’d agree with the premise normally, he’s kind of a known bad actor at this point. I’m a pretty firm believer in informed consent for this kinda stuff, but I just don’t see much reason to trust that Musk is willing to fully inform someone of the limitations, constraints, or risks involved in anything he has a personal stake in. If you aren’t informed, you can’t provide consent.
You really don’t need anything near as complex as AI…a simple script could be configured to automatically close the issue as solved with a link to a randomly selected unrelated issue.
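For the sake of the joke, here’s a minimal sketch of what that script could look like against GitHub’s REST API — the repo name, token, and triggering issue number are all placeholder assumptions, not anything from a real project:

```python
# Joke "auto-resolver": close a freshly reported issue and helpfully point the
# reporter at a randomly chosen, unrelated closed issue.
import random
import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"          # hypothetical repository
HEADERS = {
    "Authorization": "Bearer <YOUR_TOKEN>",  # placeholder token
    "Accept": "application/vnd.github+json",
}

def auto_resolve(issue_number: int) -> None:
    # Pull some closed issues to use as the "duplicate" we link to.
    closed = requests.get(
        f"{API}/repos/{REPO}/issues",
        params={"state": "closed", "per_page": 50},
        headers=HEADERS,
    ).json()
    # The issues endpoint also returns pull requests, so filter those out,
    # along with the issue we're about to close.
    candidates = [
        i for i in closed
        if "pull_request" not in i and i["number"] != issue_number
    ]
    pick = random.choice(candidates)

    # Leave the canned "already answered" comment...
    requests.post(
        f"{API}/repos/{REPO}/issues/{issue_number}/comments",
        json={"body": f"Resolved. See #{pick['number']} ({pick['html_url']})."},
        headers=HEADERS,
    )
    # ...and close the issue as solved.
    requests.patch(
        f"{API}/repos/{REPO}/issues/{issue_number}",
        json={"state": "closed"},
        headers=HEADERS,
    )
```

Hook that up to a webhook that fires on issue creation and you’ve “replaced” the whole triage team, no AI required.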
Did you read the whole article? Newsweek misrepresented the results by leaving out other answers that clearly demonstrate the vast majority think Hamas is a terrorist organization and the Oct 7th attacks were terroristic and genocidal in intent. The sample size was far too small. You’ll notice they didn’t even tell you what question was actually asked. There’s a big difference between “do you support Hamas” and “do you support the Palestinian government” or “do you support Palestinian efforts to defend against Israeli attacks?” Surveys in general, and especially ones on politically divisive topics, are notoriously easy to skew based on subtle differences in how you word questions. I’d recommend you be very suspicious of any report on a survey that doesn’t tell you what was actually asked.
From a shit survey misquoted by a failed Republican sycophant. Echo chamber.
If you think that’s what’s happening, you’ve been in an echo chamber yourself.
Guess we’ll find out whether TikTok or reproductive freedom is more important…
Pizza Party Fridays (5 pm - midnight, hackathon and 1/2 off pizza (plus tax))
Attendance mandatory!
Totally replacing my snowblower with one of these
“Among other restrictions, US federal law controls the export of strong cryptographic materials, which are classified as a munition. Under these restrictions, the Fedora Project cannot export or provide Fedora software to any forbidden entity, including through the FreeMedia program”
You should let Fedora know that.
They ARE in force for exports…including software.
Weird that this one isn’t filled with a bunch of instructions to be an unbiased raging white supremacist conspiracy theorist.
Best I can do is a Predator for $3mil.
Kudos to the person (or, ironically, AI model) who chose that picture, though. Pepto Popsicle is the best euphemism for LLMs as AI that I can imagine.
I imagine the implementation would cost them more than the fine…