Microsoft CEO calls for tech industry to ‘act’ after AI photos of Taylor Swift circulate on X::Satya Nadella spoke to Lester Holt about artificial intelligence and its ability to create deepfake images of others. After pictures of Taylor Swift circulated, he called for action.

  • Buttons@programming.dev · 2 years ago

    “Allowing entities other than us to control AI is dangerous. We must act!”

    – Microsoft probably

    I have no problem using the law to stop abusive deepfakes, but I do have a problem using the law to take AI away from regular people. Regular people need to be able to run their own AIs. All the worst outcomes involve taking AI away from regular people first.

  • Cosmicomical@lemmy.world · edited · 2 years ago

    Isn’t this something that could have been done with Photoshop in 30 minutes? What’s the difference, when an almost-perfect result could have been made just as easily?

    PS: I haven’t seen the images being discussed, and that makes this even more alarming: legislation could be passed based on images you’re not even morally allowed to review. It could all be fictional and I would never even know.

  • bloopernova@programming.dev · 2 years ago

    "TAYLOR SWIFT WAS A LINE THAT SHOULD NOT HAVE BEEN CROSSED.

    PREPARE TO REAP THE WHIRLWIND!"

    – The White House, apparently.

    • xor@infosec.pub · 2 years ago

      i will never understand how taylor swift became this super duper billionaire royalty who i have to hear about every day now…

  • EfreetSK@lemmy.world · 2 years ago

    AI generation can be used for disinformation, which could literally destabilize, or outright end, the world as we know it.

    But fake Taylor Swift pictures, this is where we draw the line …

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 2 years ago

    Didn’t we all see this coming? Porn deepfakes were already a thing, and even before generative AI people were photoshopping women into explicit situations.

    I’d even say that right now we have much better tools to deal with the fakes than before AI, and all that is required is legislative action.

    The tech is already capable of doing automatic facial recognition at scale and we could give victims the tools to automatically send take-down notices and have them enforced.
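    The automated matching this comment imagines can be sketched with a perceptual hash: platforms fingerprint a known fake once, then flag re-uploads even after small edits. Below is a toy average-hash in pure Python, with synthetic 8×8 "images" and a hypothetical match threshold; it illustrates the idea only, not any platform's actual detection pipeline.

```python
# Hypothetical sketch: fingerprint a known fake image with a perceptual
# (average) hash, then match re-uploads that were only slightly altered.

def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel, set if it beats the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return tuple(1 if p > avg else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A known fake (synthetic gradient standing in for a real image).
known_fake = [[(i * 8 + j) * 4 for j in range(8)] for i in range(8)]
# The same image re-uploaded with a uniform brightness shift.
reupload = [[p + 3 for p in row] for row in known_fake]
# An unrelated image (checkerboard).
unrelated = [[255 if (i + j) % 2 == 0 else 0 for j in range(8)] for i in range(8)]

THRESHOLD = 10  # hypothetical: max differing bits to count as a match

fake_hash = average_hash(known_fake)
print(hamming(fake_hash, average_hash(reupload)) <= THRESHOLD)   # re-upload matches
print(hamming(fake_hash, average_hash(unrelated)) <= THRESHOLD)  # unrelated image does not
```

    The brightness shift changes every pixel but not which pixels sit above the image's own mean, which is why the hash survives the edit; real systems layer many such signals.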

  • Grimy@lemmy.world · 2 years ago

    I think this kind of stuff should be treated as revenge porn, and Twitter should absolutely get sued for letting it go on.

    Another article mentioned it was up for 17 hours and had 45 million views. I find it hard to believe Twitter didn’t know about it 15 minutes after it was posted. We also have the tech to detect when an image is NSFW and when it includes certain celebrities.

    That being said, it shouldn’t matter if it was generated, photoshopped or drawn. This is going to get muddied and Microsoft absolutely has a horse in the race.

    Distribution must be legislated, not how it’s created. Microsoft and company want to cut our access to free AI and replace it with a subscription model. They are banking on an emotional response.