

It did the same for me but I took it as an opportunity to wisely unsubscribe from all the crap in my gmail


Honestly, at this point I don’t care enough anymore to defend a throwaway comment I made, so congratulations, you won; I’m an idiot, I guess, so I’ll go worship a charlatan or something.


I don’t agree that having multiple definitions for something makes it subjective; what it makes it is vague. If you provide one of those definitions to someone and ask them if something meets it (and, for the sake of argument, they have full knowledge of how it was created), they should always come to the same conclusion. As I understand it, and the definitions you provided, what makes something subjective is whether the evaluation is unique to the person doing it. If my definition of good art is that it makes ME feel something, somebody else could look at the same thing I do and come to a different conclusion. You couldn’t build a model that filters out bad art based on that subjective definition. All I’ve been trying to say is that whether something is AI is definable, but apparently I’m being too fucking stupid to make that clear.


Whether AI art is good is subjective: it will change based on the whims of who you ask and cannot be defined. Whether something is AI generated depends on what definition you use, but given a definition it either fits or it doesn’t. It’s not subjective, it’s just a little broad. As for it being hard to detect, that has no bearing on whether it is or isn’t AI.


Yeah, I’m basically setting aside implementing it as a separate issue from defining it, which is the part I’m saying is objective. Given a definition of what type of content they want to ban, you should be able to figure out whether something you’re going to post is allowed or not; that’s why I’m saying it’s not subjective. Detecting it if you post it anyway would probably have to rely on reports, human reviewers, and strict account bans if caught, with the burden of proof on the accused to prove it isn’t AI for it to have any chance of working at all. This would get abused and be super obnoxious (and expensive), but it would probably work to a point.


In every single comment I’ve said that detecting them would be the hard part; I’ve been talking about defining the type of content that is allowed/banned, not the part where they actually have to filter it.


The criteria are whatever you put in the “no AI” policy on the site, whether that be anything from ‘you can’t post videos wholly generated from a prompt’ to ‘you can’t post anything that uses any form of neural net in the production chain’ to something in between. You can specify what types are and are not included and blanket ban/allow everything else. It can definitely be defined in the user agreement; the part that’s actually hard would be detection/enforcement.


Every single one of those I’d put under the second category. It’d be hard to detect but it’s certainly not subjective. It just depends on how it’s written.


Detecting it is difficult, but what actually is or isn’t AI should be pretty cut and dried: either nothing completely generated, or no footage edited using generative AI (depending on how strict you want to be with your ban).


You know that mpv is what Plex actually uses to play the content, right? At least on desktop. A lot of my users (almost all, actually) are watching things via the smart TV app or their phones/tablets etc. Watch states are tracked between them and for the most part it just works.


Surely it’s not DNS


This just seems like Cloudflare testing something that the CAs will eventually be running themselves, as opposed to them trying to supplant the CAs or something.


I’m not arguing that; for some people, this sounds like a good thing. I’m just imagining said older-relative types asking something like “check my emails” and the AI trying to open their unconfigured Outlook Lite or whatever and saying “I’m having trouble doing that.” I’m sure if it worked they’d be all over it, I just don’t believe it will.


Sure, if they could actually deliver on the hype they’re eating out of their own ass, that would be one thing. I just think they are fundamentally misrepresenting the capabilities of our current AI technology and are barely able to deliver an operating system that works even without trying to retrofit the entire thing for their bloated chatbots. Unless there’s a huge breakthrough in AI research that allows actual reasoning AND Microsoft manages to actually become competent, I don’t see how the end of this route is anything but a steaming turd. My money is on the bubble bursting and Microsoft “refocusing on core competencies” before getting distracted again by the next bad idea.


I can’t imagine people would think this is a good idea past the point where they actually have to use it to get anything done; the best case would be huffing copium, thinking that the part where it gets good is right around the corner.


Developing the robots is mostly up-front costs (between paying someone to “code” the thing and then buying the units) vs. paying however many employees forever, which is why I say it’s more expensive.


If they could make an actual genAI I think it could help with loneliness, but we won’t be there anytime soon, if ever, the way they keep following a strategy of scaling and optimizing a speech center.


I mean, it’s definitely more expensive to pay someone for their time, assuming you’d presumably have an army of these to replace with people. But it’d also be an actual solution instead of whatever fresh hell this is.


Every time you finally start to fall asleep and snore a little too loudly: “Sorry, I didn’t catch that.”


You can usually unsubscribe from the marketing emails and still get the tracking notifications if you don’t want to wait.


Edit: Also, the ‘wisely’ was supposed to be ‘finally’, haha, autocorrect needs to lay off the whiskey.