  • I don’t agree that having multiple definitions for something makes it subjective; what it makes it is vague. If you provide one of those definitions to someone and ask them whether something meets it (and, for the sake of argument, they have full knowledge of how it was created), they should always come to the same conclusion. As I understand it, and per the definitions you provided, what makes something subjective is that the answer is unique to the person evaluating it. If my definition of good art is that it makes ME feel something, somebody else could look at the same piece and come to a different conclusion. You couldn’t build a model that filters out bad art based on that subjective definition. All I’ve been trying to say is that whether something is AI is definable, but apparently I’m being too fucking stupid to make that clear.
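
    One way to picture the distinction (a minimal sketch; `Provenance`, its fields, and the 0.5 threshold are all made up for illustration): an objective definition is a pure function of how the work was made, so every evaluator gets the same answer, while a subjective one also takes the evaluator as an input.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Provenance:
        """Hypothetical full record of how a work was created."""
        used_generative_model: bool
        human_authored_fraction: float  # 0.0 .. 1.0

    def is_ai_per_definition(work: Provenance) -> bool:
        """Objective: a pure function of provenance. Anyone applying the
        same definition with full knowledge reaches the same verdict."""
        return work.used_generative_model and work.human_authored_fraction < 0.5

    # Subjective: the verdict also depends on who is looking.
    TASTES = {
        "me": lambda w: w.human_authored_fraction > 0.9,
        "you": lambda w: not w.used_generative_model,
    }

    def is_good_art(work: Provenance, evaluator: str) -> bool:
        """Two evaluators can disagree about the exact same work."""
        return TASTES[evaluator](work)

    work = Provenance(used_generative_model=True, human_authored_fraction=0.95)
    print(is_ai_per_definition(work))  # False, for everyone
    print(is_good_art(work, "me"))     # True
    print(is_good_art(work, "you"))    # False
    ```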



  • Yeah, I’m basically treating implementation as a separate issue from definition, and the definition is the part I’m saying is objective. Given a definition of what type of content they want to ban, you should be able to figure out whether something you’re going to post is allowed, which is why I’m saying it’s not subjective. Detecting it if you post it anyway would probably have to rely on reports, human reviewers, and strict account bans if caught, with the burden of proof on the accused to show it isn’t AI for it to have any chance of working at all. That would get abused and be super obnoxious (and expensive), but it would probably work to a point.
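
    As a rough sketch of that enforcement flow (every name and threshold here is hypothetical, not any real site’s system):

    ```python
    from enum import Enum, auto

    class Outcome(Enum):
        KEPT = auto()
        REMOVED_AND_BANNED = auto()

    def handle_report(report_count: int, reviewer_flags_ai: bool,
                      poster_proved_not_ai: bool) -> Outcome:
        """Hypothetical flow: enough reports trigger a human review; if the
        reviewer flags the post as AI, the burden of proof falls on the
        poster, and failing it means removal plus an account ban."""
        REPORT_THRESHOLD = 3  # assumed value
        if report_count < REPORT_THRESHOLD:
            return Outcome.KEPT             # never reaches human review
        if not reviewer_flags_ai:
            return Outcome.KEPT             # reviewer cleared it
        if poster_proved_not_ai:
            return Outcome.KEPT             # accused met the burden of proof
        return Outcome.REMOVED_AND_BANNED
    ```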



  • The criteria are whatever you put in the “no AI” policy on the site, whether that’s ‘you can’t post videos wholly generated from a prompt’, ‘you can’t post anything that uses any form of neural net in the production chain’, or something in between. You can specify which types are and aren’t included and blanket ban/allow everything else. It can definitely be defined in the user agreement; the part that’s actually hard is detection/enforcement.
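
    To show what I mean by specifying types and blanket-handling the rest, a minimal sketch (the category names and the default rule are invented for illustration):

    ```python
    # Hypothetical "no AI" policy: explicit rules per content category,
    # plus a blanket default for anything not listed.
    POLICY = {
        "banned": {"prompt_generated_video", "prompt_generated_image"},
        "allowed": {"ai_assisted_color_grading", "ai_upscaling"},
        "default": "allowed",  # blanket rule for everything else
    }

    def is_post_allowed(category: str) -> bool:
        """Deterministic check against the written policy: anyone applying
        the same policy to the same category gets the same answer."""
        if category in POLICY["banned"]:
            return False
        if category in POLICY["allowed"]:
            return True
        return POLICY["default"] == "allowed"

    print(is_post_allowed("prompt_generated_video"))  # False
    print(is_post_allowed("hand_drawn_animation"))    # True (default)
    ```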



  • Sure, if they could actually deliver on the hype they’re eating out of their own ass, that would be one thing. I just think they’re fundamentally misrepresenting the capabilities of current AI technology, and they can barely deliver an operating system that works even without trying to retrofit the entire thing around their bloated chatbots. Unless there’s a huge breakthrough in AI research that allows actual reasoning AND Microsoft manages to actually become competent, I don’t see how the end of this route is anything but a steaming turd. My money is on the bubble bursting and Microsoft “refocusing on core competencies” before getting distracted again by the next bad idea.



  • Developing the robots is mostly up-front cost (between paying someone to “code” the thing and then buying the units), versus paying however many employees forever, which is why I say it’s more expensive; see the rough break-even sketch below.

    If they could make an actual general AI, I think it could help with loneliness, but we won’t be there anytime soon, if ever, the way they keep following a strategy of scaling up and optimizing a speech center.
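
    To make the up-front vs. forever comparison concrete, a back-of-the-envelope sketch where every number is invented for illustration:

    ```python
    # Hypothetical one-time robot cost vs. recurring wages (all figures made up).
    ROBOT_DEV_COST = 2_000_000      # one-time: paying someone to "code" it
    ROBOT_UNIT_COST = 150_000       # per unit purchased
    ROBOT_UNITS = 10
    ROBOT_UPKEEP_PER_YEAR = 50_000  # fleet maintenance

    EMPLOYEES_REPLACED = 10
    WAGE_PER_YEAR = 60_000          # per employee, every year, forever

    def robots_total(years: int) -> int:
        return (ROBOT_DEV_COST + ROBOT_UNIT_COST * ROBOT_UNITS
                + ROBOT_UPKEEP_PER_YEAR * years)

    def employees_total(years: int) -> int:
        return EMPLOYEES_REPLACED * WAGE_PER_YEAR * years

    for years in (1, 5, 10):
        print(years, robots_total(years), employees_total(years))
    # With these numbers the robots cost far more up front, but the
    # recurring wage bill overtakes them a bit after year 6.
    ```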