Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.

It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.

A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.

  • gravitas_deficiency@sh.itjust.works · 8 months ago

    people everywhere are being blindsided by an entirely predictable thing

    Tbh, considering how humanity collectively shit the bed in the face of a global pandemic, I am approximately 0% surprised that nobody got out in front of this one either.

  • Viking_Hippie@lemmy.world · 8 months ago

    Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,”

    Such demagoguery as the latter is the rule rather than the exception in many if not most countries in the world, the US included.

    With everything to gain and hardly ever anything to lose by doing so, knowingly misrepresenting the truth or outright lying is something that almost all successful politicians do.

    Until it’s strongly disincentivized, by tightening the rules and actually enforcing the ones that already exist, disinformation as the norm will only accelerate.

  • girlfreddy@lemmy.ca · 8 months ago (edited)

    “The novelty and sophistication of the technology makes it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.”

    And that’s the problem. When AI was first announced, gov’ts should have put a few rules in place to safeguard citizens while they delved deeper. Instead, as always, they ignored the possible issues and let capitalism reign.

    • i_am_not_a_robot@discuss.tchncs.de · 8 months ago

      When was AI first announced? What kind of rules could the government have put in place to effectively prevent people outside their jurisdiction from using freely available technology on their own computers?

      • girlfreddy@lemmy.ca · 8 months ago

        When was AI first announced?

        In 2016, by Google.

        What kind of rules could the government have put in place to effectively prevent people outside their jurisdiction from using freely available technology on their own computers?

        That before the public had access to AI, the gov’t would gather unbiased experts together to determine what effects AI could/would have on politics, education, fake news, journalism, etc.

        It’s not hard for the gov’t to do their jobs. We just have to hold them to account.

        • piecat@lemmy.world · 8 months ago

          General AI has been a philosophical concept and science fiction topic for decades. It’s been a goal since at least the 80s.

  • AutoTL;DR@lemmings.world [bot] · 8 months ago

    This is the best summary I could come up with:


    LONDON (AP) — Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.

    As the U.S. presidential race heats up, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for “foreign adversaries to engage in malign influence.”

    In Slovakia, another country overshadowed by Russian influence, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections.

    In Indonesia, the team that ran the presidential campaign of Prabowo Subianto deployed a simple mobile app to build a deeper connection with supporters across the vast island nation.

    Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.

    Associated Press writers Julhas Alam in Dhaka, Bangladesh, Krutika Pathi in New Delhi, Huizhong Wu in Bangkok, Edna Tarigan in Jakarta, Indonesia, Dake Kang in Beijing, and Stephen McGrath in Bucharest, Romania, contributed to this report.


    The original article contains 1,371 words, the summary contains 201 words. Saved 85%. I’m a bot and I’m open source!

  • rayyy@lemmy.world · 8 months ago

    Most intelligent and informed people will not be fooled by AI. The cult, however, will fall for whatever they are told to believe.

    • floofloof@lemmy.ca · 8 months ago

      I wouldn’t be so confident about intelligent and informed people. They too get duped by propaganda. When we are dealing with entirely realistic-looking videos and photos, and we have spent our whole lives being trained to trust photo and video evidence, it’s going to be hard not to fall for some of this disinformation.

      Some recent examples of AI deepfakes include:

      — A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.

      — Audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer.

      — A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.

      I wouldn’t trust myself never to fall for any of that just on the basis of “intelligence”, and I’m only partly dumb. It requires us to retrain a lifetime of habitual trust in recorded evidence.