This might be the one time I’m okay with this. It’s too hard on the humans that did this. I hope the AI won’t “learn” to be cruel from this though, and I don’t trust Meta to handle this gracefully.
Pretty common misconception about how “AI” works: models aren’t constantly learning. Their weights are frozen before deployment. They can infer quite a bit from context, but they won’t meaningfully change without human intervention (for now).
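To make that concrete, here’s a toy sketch (not any real model or Meta’s actual stack; the weight matrix and `infer` function are made up for illustration). The point is just that inference reads the frozen weights and never writes them, no matter how many inputs pass through:

```python
import numpy as np

# Toy "model": a fixed weight matrix, standing in for weights
# learned during training and frozen before deployment.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))

def infer(x):
    # Forward pass: reads the weights, never updates them.
    return np.tanh(x @ weights)

snapshot = weights.copy()
for _ in range(1000):            # many "conversations" at inference time
    infer(rng.normal(size=(2, 4)))

# The weights are bit-for-bit identical after all that inference.
assert np.array_equal(weights, snapshot)
```

Changing the weights would take a deliberate human step, e.g. a fine-tuning run followed by a redeploy, which is exactly the “human intervention” above.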
My guess is you don’t know how bad it is. These people at Meta have real PTSD, and it would absolutely benefit everyone if this could in any way be automated with AI.
Next question, though: do you trust Meta to moderate? Nah, it should be an independent AI they couldn’t tinker with.
I mean, you could hire people who would otherwise enjoy the things they moderate. Keep ’em from doing that shit themselves.
But if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.