Most of the world’s largest tech companies, including Amazon, Google and Microsoft, have agreed to tackle what they are calling deceptive artificial intelligence (AI) in elections.

The twenty firms have signed an accord committing them to fighting voter-deceiving content.

They say they will deploy technology to detect and counter the material.

But one industry expert says the voluntary pact will “do little to prevent harmful content being posted”.

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced at the Munich Security Conference on Friday.

The issue has come into sharp focus because it is estimated up to four billion people will be voting this year in countries such as the US, UK and India.

Among the accord’s pledges are commitments to develop technology to “mitigate risks” related to deceptive election content generated by AI, and to provide transparency to the public about the action firms have taken.

Other steps include sharing best practice with one another and educating the public about how to spot when they might be seeing manipulated content.

Signatories include the social media platforms X, formerly known as Twitter, and Snap, as well as Adobe and Meta, the owner of Facebook, Instagram and WhatsApp.

Proactive

However, the accord has some shortcomings, according to computer scientist Dr Deepak Padmanabhan, from Queen’s University Belfast, who has co-authored a paper on elections and AI.

He told the BBC it was promising to see the companies acknowledge the wide range of challenges posed by AI.

But he said they needed to take more "proactive action" rather than waiting for content to be posted and then seeking to take it down.

That could mean that “more realistic AI content, that may be more harmful, may stay on the platform for longer” compared to obvious fakes which are easier to detect and remove, he suggested.

Dr Padmanabhan also said the accord’s usefulness was undermined because it lacked nuance when it came to defining harmful content.

He gave the example of jailed Pakistani politician Imran Khan, who used AI to deliver speeches while in prison.

“Should this be taken down too?” he asked.

Weaponised

The accord’s signatories say they will target content which “deceptively fakes or alters the appearance, voice, or actions” of key figures in elections.

They will also seek to deal with audio, images or videos which provide voters with false information about when, where and how they can vote.

“We have a responsibility to help ensure these tools don’t become weaponised in elections,” said Brad Smith, the president of Microsoft.
