GANs, or ‘generative adversarial networks’, are the artificial intelligence technology behind services that generate ultra-realistic fake faces, such as the popular This Person Does Not Exist.

To generate these fake faces, the neural network behind these services first had to be trained with thousands or millions of photos of real faces, so that it could ‘learn’ what human faces look like and later ‘imagine’ what a new one might look like, which is what happens every time we load a website like This Person Does Not Exist.
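To give an idea of what that ‘imagining’ step looks like in code, here is a minimal sketch in PyTorch: a toy generator (far smaller than the StyleGAN models behind these sites, with made-up layer sizes chosen only for illustration) turning a random latent vector into an image.

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea, not the real network behind the site: a toy
# generator that maps a random 512-dimensional latent vector to a small image.
# In the real services, this network has been trained on huge face datasets.
G = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64), nn.Tanh(),   # 64x64 RGB output for brevity
)

with torch.no_grad():
    z = torch.randn(1, 512)                    # the network "imagines" a face
    fake_face = G(z).view(1, 3, 64, 64)        # image tensor
```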


However, these artificially generated faces, which thanks to the large amount of data used to train the neural networks should display facial features blended from numerous ‘models’, on many occasions contain data from the real faces in the training dataset, allowing those faces to be reconstructed.

This is what a group of researchers from the University of Caen Normandy (France) has shown in a paper ironically titled ‘This Person (Probably) Exists’, the latest in a series of investigations aimed at challenging the idea of neural networks as ‘black boxes’ whose inner workings cannot be reconstructed and understood by humans.

They have been so successful at this that they have managed to recreate training images by ‘rewinding’ a GAN’s process from one of the images it generated:
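Conceptually, that ‘rewinding’ resembles what is usually called GAN inversion. The following is a hedged sketch of that general idea (not necessarily the authors’ exact method), reusing the toy generator `G` from the sketch above and a placeholder target image:

```python
import torch

# Hedged sketch of GAN "inversion": search for the latent vector whose output
# best matches a target image. `G` is the toy generator defined earlier and
# `target` is a placeholder for the image we want to "rewind" to.
target = torch.rand(1, 3, 64, 64)

z = torch.randn(1, 512, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.01)

for _ in range(1000):
    optimizer.zero_grad()
    reconstruction = G(z).view(1, 3, 64, 64)
    loss = torch.nn.functional.mse_loss(reconstruction, target)
    loss.backward()
    optimizer.step()

# After optimization, G(z) approximates the target; comparing such
# reconstructions with candidate photos is what can reveal training data.
```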


Original training data (top) and its reconstruction from generated deepfakes (bottom).

In short, this shows that personal data (yes, biometric data too) can still be present in AI-generated deepfakes (and not only image deepfakes).

Beyond that, they have shown that, using a technique called a ‘membership attack’, it is possible to know whether a certain piece of data (such as a photo of a person) was used to train an AI, by exploiting subtle differences in the way the AI processes photos it already knows versus those presented to it for the first time.
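A simplified version of such a membership test, again using the toy generator and the inversion idea above rather than the paper’s actual procedure, could look like this: photos the GAN was trained on tend to be reconstructed with noticeably lower error than photos it has never seen, and that gap is the telltale signal.

```python
import torch

# Hedged sketch of a generic membership test (not the paper's exact procedure),
# reusing the toy generator `G` and the inversion loop from the earlier sketch.
def reconstruction_error(G, image, steps=300):
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z).view_as(image), image)
        loss.backward()
        opt.step()
    return loss.item()

THRESHOLD = 0.05  # illustrative value; in practice calibrated on known examples

def probably_in_training_set(G, image):
    # Images the GAN already "knows" reconstruct with unusually low error.
    return reconstruction_error(G, image) < THRESHOLD
```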


But the researchers went one step further: by combining this technique with a facial recognition AI, they were able to determine whether a given AI had been trained with photos of a certain person, even when the exact photo provided to the AI had not been used in its training. In this way, they discovered that numerous faces generated by GANs seemed to match the facial features of real people.

Image: Webster et al.

The left column in each block shows deepfakes generated by a GAN. The next three faces are photos of real people identified in the training dataset.
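In code, that combination could be sketched roughly like this, where `face_embedder` stands in for a hypothetical pretrained face-recognition network and the similarity threshold is purely illustrative:

```python
import torch

# Hedged sketch of the "identity" variant: instead of comparing raw pixels,
# compare identity embeddings produced by a face-recognition network.
# `face_embedder` is a hypothetical pretrained model returning one vector
# per face; the 0.7 threshold is illustrative only.
def same_identity(generated_face, real_photo, face_embedder, threshold=0.7):
    with torch.no_grad():
        emb_fake = face_embedder(generated_face)
        emb_real = face_embedder(real_photo)
    similarity = torch.nn.functional.cosine_similarity(emb_fake, emb_real)
    return similarity.item() > threshold

# A GAN-generated face that matches a real person's photo above the threshold
# suggests that photos of that person were part of the training dataset.
```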


What does this discovery mean?

This discovery raises serious privacy concerns. Given that the technique can be applied to any kind of data (not just photographs of faces), it opens the door, for example, to discovering whether someone’s medical data was used to train a neural network associated with a disease, revealing that person as a patient.

Furthermore, our mobile devices make increasingly intensive use of AI, but due to battery and memory limitations, models are sometimes only partially run on the device itself, with the rest of the processing sent to the cloud, an approach known as ‘split computing’. This is done on the assumption that it does not reveal any private data, but this new ‘membership attack’ shows that this is not the case.
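As a rough illustration of what ‘split computing’ means in practice (with made-up layer sizes), the model is simply cut in two, and only the intermediate activations leave the phone:

```python
import torch
import torch.nn as nn

# Hedged sketch of "split computing": the first layers run on the phone and
# only their intermediate activations are uploaded, yet a membership attack
# can still extract information from those activations.
full_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),     # on-device half
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),    # cloud half
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
device_part = full_model[:2]
cloud_part = full_model[2:]

image = torch.rand(1, 3, 64, 64)
activations = device_part(image)       # this is what leaves the phone
prediction = cloud_part(activations)   # processing is finished in the cloud
```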

Fortunately, knowing this also has two positive uses:

  1. It makes it possible to discover whether your image (or that of one of your audiovisual works) has been used without your permission to train a neural network.

  2. It will eventually allow safeguards to be built into GANs to ensure that the images they generate are adequately anonymized.

Via | MIT Technology Review