MCasq_qsaCJ_234@lemmy.zip to Technology@lemmy.world · English · 21 days ago
AI is learning to lie, scheme, and threaten its creators during stress-testing scenarios (fortune.com)
RickRussell_CA@lemmy.world · 17 days ago
I don’t necessarily disagree with anything you just said, but none of that suggests that the LLM was “manipulated into this outcome by the engineers”. Two models disagreeing does not mean the disagreement was deliberate manipulation.
kromem@lemmy.world · 3 days ago
I’m definitely not saying this is a result of engineers’ intentions. I’m saying the opposite: it was an emergent change tangential to any engineering goals. Just a few days ago, leading engineers found that model preferences can be invisibly transmitted into future models when their outputs are used as training data. (Emergent preferences should maybe be getting more attention than they are.) They’ve compounded in curious ways over the year-plus since that happened.