
Deepfakes: Landscape, Threats, and Impacts

The Amsterdam-based company Deeptrace Labs recently published a report on deepfakes, exploring the deepfakes circulating on websites, forums, and mobile applications.

Although many worry that deepfakes will be used to manipulate elections and sow chaos in different countries, so far the vast majority of them are much more “juvenile”: for example, superimposing the faces of female celebrities onto the bodies of porn performers (which should not surprise anyone familiar with the history of the internet).

As Giorgio Patrini of Deeptrace Labs explains, the study began back in 2017, immediately after the company was founded, and consisted of a comprehensive survey of sites, forums, and services. Researchers combed through posts in deepfake-creation communities, identifying obscure and niche corners to get a complete picture of the ecosystem. Deeptrace collected data from sites, forums, and communities using publicly available APIs and tools developed in-house. As part of the study, the company also looked at sites and YouTube channels where not all of the content was necessarily related to deepfakes.

Deeptrace Labs found 14,678 deepfake videos, 96% of which are pornography. Most of these videos superimpose the faces of famous actresses onto the bodies of porn performers. Indeed, most victims of deepfakes are women, while the subjects of non-pornographic videos are mostly men.

“Non-Western subjects appear in almost every third video on deepfake pornography sites, and the subjects of a quarter of these videos are South Korean K-pop singers,” says Patrini. “This suggests that deepfake pornography is becoming an increasingly global phenomenon.”

More than 90% of deepfake videos on YouTube featured Western celebrities, from representatives of creative professions to politicians and corporate executives. But Patrini emphasizes that this is not solely a Western phenomenon. In the past few years, there has been an explosion in the amount of work on generative adversarial networks (GANs). A GAN consists of two neural networks, a generator (synthesizer) and a discriminator (detector), that together create deepfake images or videos, improving the quality of the output through a feedback loop between the two. Patrini says GANs are certainly among the “most popular and effective generative methods based on deep learning.”
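To make that generator/discriminator feedback loop concrete, here is a minimal sketch of a single GAN training step in PyTorch. The network architectures, data format, and hyperparameters below are illustrative assumptions, not taken from the Deeptrace report or from any particular deepfake tool.

```python
# Minimal GAN training-step sketch (illustrative assumptions, not a real deepfake pipeline).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed: flattened 28x28 grayscale images

# Generator (synthesizer): maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),  # outputs scaled to [-1, 1]
)

# Discriminator (detector): estimates the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update; real_images is a (batch, image_dim) tensor in [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator -- the "feedback mechanism"
    #    that gradually improves the quality of the synthetic output.
    g_loss = criterion(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

This sketch illustrates only the adversarial feedback mechanism itself, not a complete face-swapping pipeline.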

Deeptrace Labs' research shows that the noticeable increase in the volume of publications on GANs can only be indirectly associated with deepfakes: the company could not establish a direct causal relationship between the growth in GAN research and the growth in deepfakes.

“More and more people, not just Ph.D. candidates, can experiment with algorithms and come up with new variations of their own, [and] we can give an indirect estimate of this through the increase in the number of articles,” says Patrini. The publication of such experimental work and the emergence of new uses for the technology suggest that these ideas can be turned into reusable code and into more reliable and effective tools that even inexperienced users will be able to handle.

“I would not say that these uses of deepfakes are positive, but they are not harmful by default,” says Patrini. And since deepfakes are not just a Western phenomenon, Deeptrace believes that countering their malicious use will require global action.

“Deepfakes present significant business opportunities, as evidenced by the number of different tools and sites that have become available,” says Patrini, noting that this also contributes to the commodification of these tools. “And the mere idea of deepfakes is enough to destabilize political processes.”
