Lee Duna@lemmy.nz to Technology@lemmy.world · English · 2 years ago
Reddit's licensing deal means Google's AI can soon be trained on the best humanity has to offer — completely unhinged posts (businessinsider.com)
249 comments
FaceDeer@kbin.social · 2 years ago
Negative examples are just as useful to train on as positive ones.
MelodiousFunk@startrek.website · 2 years ago
That's what she said.
Rustmilian@lemmy.world · edited · 2 years ago
The AI is either going to be a horny, redpilled, schizophrenic, sociopathic egomaniac that wants to kill everyone and everything, or a devout, highly empathetic nun that believes in world peace and diversity.
wise_pancake@lemmy.ca · 2 years ago
They'll tell it to be polite, helpful, and always racially diverse, so there's no way it can be any of those things.
Rustmilian@lemmy.world · edited · 2 years ago
That heavily depends on how well they train it and on them not making any mistakes. Consider the true story of ChatGPT2.0.
wise_pancake@lemmy.ca · 2 years ago
I'll have to look at that later; that video sounds promising! I was just joking, because the default prompts don't magically remove bias or offensive content from the models.
OpenStars@startrek.website · 2 years ago
Why not both? Life… ah, finds a way.