Hard as it may be to believe, many of the faces we see today on the Internet or in the street, often used in advertising on banners, posters, or food packaging, are faces generated by artificial intelligence. The technology in this field is so advanced that it is sometimes extremely difficult to tell whether a face is real or was produced by machine learning.
Two researchers have published a study on AI-generated faces in which many participants were unable to tell which faces were actually real.
Differentiating a human face from one generated by AI is increasingly difficult
Sophie Nightingale, from the Department of Psychology at Lancaster University, and Hany Farid, from the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, conducted research comparing real photographs with AI-generated images. They concluded that today, 'no one can tell the difference'.
The study involved 315 participants who had to determine whether the faces presented to them were real. In one part of the study, the 'fake' faces were identified only 48.2% of the time. In another test, participants first received training on key cues for spotting synthetic faces. Even so, accuracy rose only to 59%, a modest improvement.
In a further test, participants rated how trustworthy each face looked on a scale from 1 to 7. On average, they rated the AI-generated faces as more trustworthy than the real ones. Smiling faces inspired more confidence, although it is worth noting that 65.5% of the real faces and 58.8% of the fake ones were smiling.
The technique behind these convincing faces
The AI-generated faces were created with a Generative Adversarial Network (GAN). In this technique, two neural networks are pitted against each other: a generator starts from a random array of pixels and gradually learns to produce a face, while a discriminator, the second network, learns to distinguish generated faces from real ones and penalizes the generator whenever it detects a fake. Little by little, the generator's output becomes indistinguishable to the discriminator, and therefore to human beings.
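The adversarial loop described above can be illustrated with a toy example. The sketch below is not the study's face model; it is a minimal one-dimensional GAN, written in plain Python with hand-derived gradients, in which a linear generator learns to mimic samples from a Gaussian distribution. All names and hyperparameters here are illustrative assumptions; a small weight decay on the discriminator is added because simple GANs like this are known to oscillate without some form of regularization.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

REAL_MEAN, REAL_STD = 4.0, 0.5      # the "real data" distribution

a, b = 1.0, 0.0                     # generator g(z) = a*z + b
w, c = 0.0, 0.0                     # discriminator d(x) = sigmoid(w*x + c)
lr, batch, steps, decay = 0.02, 64, 4000, 0.1

for _ in range(steps):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # with weight decay on w and c for stability.
    gw = gc = 0.0
    for xr, xf in zip(reals, fakes):
        sr, sf = sigmoid(w * xr + c), sigmoid(w * xf + c)
        gw += (1.0 - sr) * xr - sf * xf
        gc += (1.0 - sr) - sf
    w += lr * (gw / batch - decay * w)
    c += lr * (gc / batch - decay * c)

    # Generator step: ascend log d(fake) (the non-saturating loss),
    # i.e. move so the discriminator is more easily fooled.
    ga = gb = 0.0
    for z in zs:
        sf = sigmoid(w * (a * z + b) + c)
        ga += (1.0 - sf) * w * z
        gb += (1.0 - sf) * w
    a += lr * ga / batch
    b += lr * gb / batch

# After training, generated samples should cluster near the real mean.
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(fake_mean)
```

The structure mirrors the article's description: the generator starts producing noise, the discriminator penalizes it for detectable fakes, and the two improve in tandem until the generated samples approach the real distribution.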
Participants had to classify a total of 400 real images and 400 AI-generated images of people across different races and genders. Interestingly, white male faces were the ones classified least accurately.
The study's authors argue that those building this kind of technology should consider whether the benefits outweigh the risks. There are currently major efforts to improve the detection of deepfakes and similar content, such as the C2PA (Coalition for Content Provenance and Authenticity), backed by technology companies including Adobe, Arm, and Microsoft. Telling an artificial face from a real one is becoming genuinely difficult for many people, which poses a serious problem, particularly for the proliferation of fake news. That is why increasingly precise and sophisticated detection systems are needed.