Like almost all technological advancements, deepfakes started out as a force for good.

Or, rather, a force for entertainment: If you’ve seen the Star Wars movie Rogue One, you’ve seen the kind of technology used to make deepfakes. Using machine learning and artificial intelligence, deepfakes allow a person’s head to be superimposed onto another person’s body in a moving video. Thus, a young Carrie Fisher’s face was morphed onto the body of another actress for that movie’s closing scenes.

But what starts with a Star Wars or a Fast and Furious movie moves seamlessly into photoshopping a celebrity’s head onto a porn star’s body, and then, let’s say, your head. Or the head of any woman whose image is readily available on the internet.

Terrifying, right? It’s a way of making revenge porn without ever needing to get your hands on actual explicit material of your victim. All you need is their image and an adult video that’s already been made, as well as the technological wherewithal to put the two together.

This is the threatening new frontier of revenge porn in Australia, and you could be its next victim.

Deepfakes, in the revenge porn sense, began buried in a subreddit, where a user going by that name offered to take images of celebrities and paste them over the bodies of porn stars. Motherboard uncovered the story in December 2017, using a deepfake incest porn video ‘starring’ Gal Gadot as its chilling example. (Emma Watson, Taylor Swift and Meghan Markle have all had their images used in this way.)

The subreddit swelled to more than 100,000 subscribers before Reddit unceremoniously shut it down in February of this year. Before that, Twitter, Discord, Imgur and Pornhub had all banned deepfakes and other distorted and manipulated porn imagery.

But by this point it was too late.

The technology that powers deepfakes had already leapt light years ahead of where it started, and face-swapping apps were readily available on most smartphones to “almost anyone with limited computer skills,” according to Dr Nicola Henry, an associate professor at RMIT who researches sexual violence. “Their motivations might be to cause humiliation and embarrassment, or to obtain sexual gratification as a form of entertainment.”

Very quickly, deepfakes also started to get convincing, really convincing. Early deepfakes have an air of the uncanny valley to them, faces stitched on top of bodies where they clearly don’t belong. But as the machine learning has evolved, matching the facial expressions of the person in the video to the new face with a shocking level of photorealism, it can sometimes be hard to tell the real from the fake.

Noelle Martin, an Australian victim of deepfake porn who had her images stolen and appropriated, was subjected to strangers commenting on the explicit, disturbingly realistic images on dozens of different websites. She told A Current Affair that she felt “physically sick, disgusted, angry, degraded, dehumanised” by the experience.

“For many victims this is an invasion of privacy and violation of the right to dignity, sexual autonomy and freedom of expression,” says Dr Henry, of the particularly insidious threat posed by deepfakes.

In Australia there is some legal recourse against those creating and disseminating deepfakes. In South Australia, Victoria, NSW and the ACT, “intimate” and “invasive” photographic and video-based sexual violence is a crime, punishable by up to four years in prison. These laws were introduced to combat revenge porn, but they also deal explicitly with deepfakes.

But there are a whole bunch of caveats here. Pressing charges is difficult if the abuser is an unidentifiable avatar, and even more difficult if they’re not based in Australia. It’s also difficult to discern what is real and what’s fake, which is part of why deepfakes are so damaging to victims and so hard to prosecute. There’s also the possibility of civil litigation, but Dr Henry says that this avenue is “extremely costly and time-consuming.” In the UK, which has just criminalised upskirt images, commentators are hoping that deepfakes are next to be outlawed.

What can you do if you find yourself a victim of deepfakes?

Dr Henry advises visiting the image-based abuse portal of the Australian Office of the eSafety Commissioner. Reporting the incident to police is also important. Similarly, go directly to the platform – Facebook, Twitter, WhatsApp – on which the images are being shared. “Generally, most sites will take action to remove any images of minors from their sites, and the type of action taken for adults will depend on the site,” Dr Henry explains.

Some platforms have banned deepfakes, but others are yet to follow suit. Regardless, the technology used to detect them lags behind. According to users, Pornhub is still flooded with deepfakes despite officially outlawing them. In June, a startup called Truepic, which promises to weed out altered images, raised more than $8 million in funding. More platforms like this are needed, and fast, before real psychological damage is done to the victims of deepfakes who stumble across their face plastered over an explicit video on social media.

Humiliation, blackmail and control

There’s also potential here for deepfakes to be used as tools of humiliation, blackmail and control. In May, Indian investigative journalist Rana Ayyub found herself the victim of a deepfake video spread on social media as a response to a story that she had written, exposing corruption at the heart of the Indian government.

“The slut-shaming and hatred felt like being punished by a mob for my work as a journalist, an attempt to silence me,” Ayyub wrote in an op-ed for The New York Times. “It was aimed at humiliating me, breaking me by trying to define me as a ‘promiscuous’, ‘immoral woman’.”

This is the real threat of deepfakes: They can happen to anyone, anywhere, and can be made by anyone with a smartphone in their hand.