When Deep Fake technology first appeared in early 2018, it was used to put famous faces on the bodies of porn performers to produce reasonably convincing videos.

But some fear that Deep Fakes will soon serve a much darker agenda.

"There's going to be a big wave of Deep Fakes coming our way," said Fabrice Pothier, a spokesman for the Transatlantic Commission on Election Integrity that was set up to combat the growing amount of interference in regional and national elections.

Backed by former US vice-president Joe Biden and a raft of former politicians and senior figures from Nato and other bodies, the commission plans to produce tools to help elections progress without interference.

One tool will target Deep Fakes - especially those made to put words in the mouths of politicians or other public figures involved with elections.

Platform patrol
Time is running out to develop such tools, said John Gibson, from ASI Data Science, which has been advising the commission on ways to spot Deep Fake videos.

"It is probable to almost certain that within, say, a couple of years, basically anyone with a bit of tech smarts will be able to create highly persuasive video or audio of more or less anyone in the public domain saying or doing more or less what they want on a video and then disseminate it," he said.

ASI was called in because of its success in making tools to automatically spot videos made by the Islamic State group being spread on social media.

Those well-produced "official" videos were key to the radicalisation of many people who carried out "lone wolf" attacks in London and other cities, said Mr Gibson.

"There are particular classes of video that cause the real damage," he said. "They are slick and well-produced.

"The quality of the content matters because you can start to persuade people that are sceptical. These are so troubling because they are so visceral."

As Deep Fake technology improves, it could be used to generate convincing clips that significantly damage debate and undermine legitimate elections.

Big web platforms such as Facebook and YouTube did a lot of their own work to find and flush out IS propaganda, said Mr Gibson, but smaller firms need help to scrutinise the huge amount of video flowing online. The same will be true of Deep Fakes.

Systems based around machine learning and AI can do the job of finding content and processing video far faster than humans can, he said.

Research suggests that the IS videos appeared online via more than 400 different platforms, said Mr Gibson. Deep Fakes are likely to be uploaded through at least as many routes.

"If you are spreading fake news it does not matter to you where it is, it's not like you get more status if it's on YouTube," he said. "You just want people to look at it.

"As long as it is on the open web and as long as you can cut and paste a link to it in a message the job is basically done," he told the BBC.

Fighting fakes
There have already been efforts to combat election interference, most notably during Mexico's recent presidential election.

"Mexico has a long history of social network manipulation that goes way back," said Tom Trewinnard, director of programmes at media firm Meedan which helped to run a project to combat fake news and disinformation in the country called Verificado.

The electronic disruption intensified during the 2018 election. One of the most public examples took place during the final television debate between presidential candidates on 12 June.

During the debate, Ricardo Anaya, of the National Action Party, revealed that his party's website was publishing documents that criticised the leading candidate, Andres Manuel Lopez Obrador.

While the debate was under way, the site was hit by a sustained cyber-attack and was knocked offline for hours.

Other interference included hashtag poisoning on Twitter.

This, said Mr Trewinnard, involves flooding Twitter with spam posts that use a trending hashtag supporting a rival.

"That triggers Twitter's spam filters which kills the hashtag from the trending feed," said Mr Trewinnard.

Some attacks were much less sophisticated. One claimed to reveal that a candidate was not born in Mexico, but its creators failed to change the dates on the copied birth certificate image, which would have made the target more than 100 years old.

Verificado brought together 90 separate organisations including major media groups, universities and civil organisations to find, investigate and debunk election-related material, said Mr Trewinnard.

The media groups investigated suspect material and published rejoinders if it was found to be fake. Verificado published hundreds of separate items which were shared millions of times and often helped dilute the impact of the fakes.

It was clear, he said, that there was an appetite in Mexico for verified news sources.

It is less clear whether Verificado worked, said Mr Trewinnard, but as the project ended there were loud calls from across the political spectrum for it to continue, because the problem of fake news in Mexico has not gone away with the election of Mr Lopez Obrador.

But what has become obvious is why so many groups are keen to mount such large-scale disinformation campaigns, said Toby Abel, from AI firm Krzana, which helped Verificado scour sources for news.

There was a lot of information to check, said Mr Abel, and though some of it was "laughable nonsense" there were still good reasons for putting it online and trying to get it widely shared.

The reason, he said, was to poison debate generally and undermine people's faith in anything and everything they saw.

"The insidious danger of mass amounts of fake news is that we don't know what to believe," he said. "I don't think we have yet got to a point where we know how to handle this."