The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

At the recently held 2024 Munich Security Conference, 20 leading technology companies signed a pledge "to work together to detect and counter harmful AI content" that could interfere with the democratic elections being held around the world this year, in which more than 4 billion people are expected to vote.

Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X signed the so-called “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.”

The Accord, a "voluntary framework," has seven main goals:

  1. Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated.
  2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible.
  3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms.
  4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content.
  5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content.
  6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content.
  7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content.

With the creation of AI models capable of transforming text into realistic video, the integrity of the democratic elections taking place around the world in 2024 stands to be threatened. Four years ago, I wrote about 'deepfakes' in Indian elections, and how doctored videos could potentially alter the outcome of an election or even incite people to violence. Today's technology, just four years later, is much more powerful and poses the same risks. It's reassuring that technology companies have demonstrated a commitment to help defend election integrity, and they now have a framework that can be followed.