Software

New UK Law to Combat Harmful AI Pornography

Image Credits: UK GOV

The U.K. is set to outlaw the non-consensual sharing of deepfake pornographic images and videos under a new Online Safety Bill. Under the bill, people who share pornographic images or videos that have been manipulated by AI or other software, for example to remove clothing or replace faces, without the subject's consent could face time behind bars.

The bill's wording is slightly ambiguous: although it outlaws the sharing of deepfakes, it never mentions the AI and machine learning technology essential to making them, referring instead to 'editing software'. The bill reads:

The package of reforms follows growing global concerns around the abuse of new technology, including the increased prevalence of deepfakes. These typically involve the use of editing software to make and share fake images or videos of a person without their consent, which are often pornographic in nature.

It's also unclear how far the bill will actually protect people, as the technology behind pornographic deepfakes is not itself banned, and the sites that let users upload and manipulate images using AI remain legal.

In June 2019, a Windows and Linux application was released that used generative adversarial networks, a common machine learning framework, to remove clothing from images. It was taken down later that month, although copies of the application remain freely available online.

An ongoing fight against illegal pornography

In the past few years, there have been many attempts to curb the problem of deepfake porn, a problem that disproportionately affects women. In 2018, Google added "involuntary synthetic pornographic imagery" to its ban list, allowing people to request that the search engine block results showing them in faked sexually explicit images and videos. Then, in June 2022, Google banned the creation of deepfakes on Colab, its machine learning collaboration platform.

A spokesperson from Google told Vice:

We regularly monitor avenues for abuse in Colab that run counter to Google’s AI principles, while balancing supporting our mission to give our users access to valuable resources such as TPUs and GPUs. Deepfakes were added to our list of activities disallowed from Colab runtimes last month in response to our regular reviews of abusive patterns.

Scarlett Johansson, a common subject in deepfake porn, said this to the Washington Post when asked about deepfakes and the law:

I think it’s a useless pursuit, legally, mostly because the internet is a vast wormhole of darkness that eats itself. There are far more disturbing things on the dark web than this, sadly.

The problems that the development of AI brings to society are complex and hard to fully understand. On the one hand, stronger AI helps us solve problems we could not solve without it; on the other, it creates problems we don't know how to solve.

James Capell

Technical editor and journalist. I have a particularly strong interest in NLP, AI ethics and cyber crime. Not too fond of cats.