The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California’s top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.

Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year’s proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”

A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign “left the staff of the Southern California Air Quality Management District (SCAQMD) reeling,” the article says.

    • Snot Flickerman@lemmy.blahaj.zone · 2 days ago

      This was happening before AI, with less sophisticated tools, often called “persona management,” which allowed one person to control numerous bots using pre-written scripts that could be called up as the situation required. The only difference AI has made is the speed and scale at which the same thing can be done, and the results are more convincing because they aren’t all culled from the same script.

      https://www.axios.com/2017/12/15/bots-flooded-the-fcc-with-comments-about-net-neutrality-1513307159

      Here’s an article about a flood of bot comments in an FCC open comment period on net neutrality in 2017, five years before OpenAI released ChatGPT. So this was definitely going on before AI tools as they now exist were available. It’s a quantitative difference, not a qualitative one; in other words, it’s the same thing at a larger scale, thanks to the speed of AI.

      • T156@lemmy.world · 9 hours ago

        It does make it harder to find them, because the phrasing is similar but not identical, thanks to randomness.

        Whereas before, you could probably filter out a good chunk of it just by matching identical messages or keywords.
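The exact-match filtering described above can be sketched in a few lines. This is a minimal illustration, not any regulator's actual tooling, and the sample comments are invented for the example: identical scripted messages are trivially flagged by counting normalized duplicates, while AI-paraphrased variants would each appear only once and slip through.

```python
from collections import Counter

# Hypothetical sample of submitted public comments (invented for illustration).
comments = [
    "Reject the gas appliance ban!",
    "Reject the gas appliance ban!",
    "Reject the gas appliance ban!",
    "Please reconsider this proposal; it would hurt small restaurants.",
]

# Normalize lightly, then count exact duplicates -- the pre-AI filter.
counts = Counter(c.strip().lower() for c in comments)
suspected_form_letters = {text: n for text, n in counts.items() if n > 1}

print(suspected_form_letters)
# An AI-paraphrased campaign defeats this filter: each message is worded
# slightly differently, so every count stays at 1.
```

This is why randomized phrasing matters: the filter's signal is repetition, and paraphrase removes the repetition without changing the message.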

      • GreenBeard@lemmy.ca · 10 hours ago

        Yeah, you can kill a man with a knife but you can do it a lot faster and easier with a nuclear warhead. People aren’t scared of an aggressive chihuahua, but they’ll have an aggressive pitbull put down. The scale and scope of damage matters.

        • Snot Flickerman@lemmy.blahaj.zone · 10 hours ago

          Allow me to quote myself, from my initial comment, which was the first in this thread.

          The fines/prison time should be even more severe when AI generated messages are fraudulently being promoted as real humans, simply due to the industrial speed and scale AI generation allows.

          I know this; I made it clear why it’s a problem before anyone else had even commented in this thread. I was merely pointing out that this had been a growing problem for a long time before AI became part of it.