After deepfakes, a new frontier of AI trickery: fake faces

Images generated by an algorithm are appearing increasingly in bot campaigns and online news outlets

The use of AI-generated faces means that networks of real-seeming social media accounts have been created to spread misinformation © FT montage

Siddharth Venkataramakrishnan, October 12 2020

“Alfonzo Macias” looks unremarkable at first glance — bearded, bespectacled, with a short widow’s peak. But his strangely distorted glasses and the dissolving background behind him hint at a discomforting truth: Mr Macias never existed.

Though undetectable as such to the naked eye, the uncannily human face is in fact the creation of an algorithm — one used by the pro-Trump media outlet TheBL to give an identity to one of the many fake Facebook accounts it uses to drive traffic to its website.

While less attention-grabbing than the viral deepfake videos that have manipulated the speech and actions of politicians and celebrities to popular effect in recent years, static artificial intelligence-generated faces are becoming an increasingly common tool for misinformation, experts say. Instead of making real people appear to say and do things they have not, the technique generates entirely “new” people from scratch.

Already, fake faces have been identified in bot campaigns from China and Russia, as well as in rightwing online media outlets and purportedly legitimate businesses. Their proliferation has led to concerns that the technology could represent a more ubiquitous and pressing threat than deepfakes, as online platforms grapple with a rising tide of misinformation ahead of the US election.
The report from Graphika and the Atlantic Council’s Digital Forensic Research Lab into fake identities, showing tell-tale signs that the profile image for Alfonzo Macias is a fake

“A year ago, this was a novelty,” tweeted Ben Nimmo, director of investigations at social media intelligence group Graphika. “Now it feels like every operation we analyse tries this at least once.”

The face race

Like deepfakes, AI-generated faces are created using a technology known as GANs, or generative adversarial networks. One network generates content, while another compares it to real human faces, forcing the first to improve until the second can no longer distinguish the synthetic image from a real face.

Digital renderings of fictional humans have had a growing presence online in recent years, with stars such as the virtual popstar, model and activist Miquela drawing vast followings on Instagram and Twitter. But what sets GAN-generated faces apart is their photorealism — the level of detail that lends the characters a strange lifelikeness.

“The most recent GAN models [such as Nvidia’s popular StyleGAN2] can now be used to create highly realistic synthetic images of human faces, down to the minuscule details — in particular, skins and hair,” said Siwei Lyu, a professor in computer science at the University at Albany, State University of New York.

ThisPersonDoesNotExist, a website that creates a new StyleGAN2 face each time it is refreshed, demonstrates how convincing such images can be. Nor is the technique limited to human faces: dozens of variants exist, generating everything from cars to cats.

While concerns over AI-powered misinformation had focused largely on political deepfakes, a substantial case had yet to materialise, said Henry Ajder, a researcher who specialises in deepfakes and synthetic media.
“There hasn’t been the kind of [Donald] Trump waving the nuclear red button around.”

However, instances of GAN-generated fake faces being used for deception have been appearing since last June, when the Associated Press identified an account on LinkedIn masquerading as a think-tank employee. Larger-scale use of the technique was first identified in December, when Graphika and the Atlantic Council’s Digital Forensic Research Lab released a report on a network of more than 900 pages, groups and accounts linked to the rightwing news outlet Epoch Media Group.

“They used these fake faces to bolster their Facebook presence and deliver their messages to a wider audience,” said Max Rizzuto, research associate at the DFR Lab.

All of these faces were generated by thispersondoesnotexist.com

Meanwhile, nation states have also spotted the technology’s potential, with Graphika discovering dozens of GAN-generated faces used in campaigns linked to China and Russia. In the case of China, GAN-generated images were used as profile pictures in a Facebook campaign, with fake accounts pushing pro-Beijing talking points on subjects including Taiwan, the South China Sea and Indonesia. By contrast, the Russian campaigns used fake faces to create the personas of fictional editors behind divisive political news outlets.

Giorgio Patrini, chief executive of a deepfake detection platform…
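The adversarial dynamic behind these fake faces (one network generating samples, another judging them against real data, each improving against the other) can be sketched with a toy one-dimensional example. This is purely illustrative and assumes nothing about StyleGAN2 itself: here the “generator” is a linear function learning to mimic a normal distribution, and the “discriminator” is a logistic regression; all names and parameters are hypothetical.

```python
import numpy as np

# Toy 1-D GAN sketch. "Real" data are samples from N(4, 1).
# Generator: g(z) = a*z + b, fed standard-normal noise z.
# Discriminator: d(x) = sigmoid(w*x + c), estimating P(x is real).
rng = np.random.default_rng(0)
a, b = 1.0, 0.0      # generator parameters (start far from the data)
w, c = 0.0, 0.0      # discriminator parameters
lr = 0.05

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # samples from the true distribution
    z = rng.normal(0.0, 1.0, size=32)      # generator noise
    fake = a * z + b                       # generated samples

    # Discriminator ascent on E[log d(real)] + E[log(1 - d(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator ascent on E[log d(fake)] (non-saturating loss),
    # pushing its samples toward whatever the discriminator calls "real"
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w            # gradient of log d(fake) w.r.t. fake
    a += lr * np.mean(g_signal * z)
    b += lr * np.mean(g_signal)

print(f"learned generator: g(z) = {a:.2f}*z + {b:.2f}  (real data ~ N(4, 1))")
```

After training, the generator's output mean (the parameter `b`) has drifted from 0 toward the real data's mean of 4, mirroring on a tiny scale how a GAN generator is driven to match the statistics of real faces until the discriminator can no longer tell the difference.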
From the Financial Times