Deepfake AI: Why you should be afraid and how to protect yourself 

Security has become a top priority in recent years. Data leaks, hacker attacks, money theft, and bank account breaches have pushed individuals and companies alike to focus on safety and look for ways to improve security. But as quickly as new technologies and protective tools are developed, the techniques employed by cybercriminals evolve with them.

While companies worry about data security vulnerabilities and adopt software to prevent them, a new threat, deepfake AI, emerged a few years ago. The bad news is that it is not only companies, corporations, and celebrities that should be wary, but ordinary people as well. In this article, we'll explain what deepfakes are, how they can damage lives, and how to protect yourself from them.


What is deepfake AI?

By definition, "deepfake" is a combination of two words: "deep learning" and "fake". Simply put, deepfake is a technique that employs artificial intelligence to create realistic fake alterations of photo, audio, or video content. Machine learning algorithms process existing images, videos, and voice recordings and superimpose them onto source content to produce a fictional result that appears real.

The term appeared in December 2017, coined by a Reddit user with the username "deepfakes". By superimposing celebrity faces onto the bodies of adult film performers with the help of AI, the Redditor made several fake adult videos featuring Gal Gadot, Maisie Williams, and Taylor Swift.

In February 2018, Reddit banned the r/deepfakes subreddit for sharing involuntary pornography. Other websites joined the fight against AI-based fakes and started banning such content as well.

Now, let's look at how deepfakes are developed. The creator uses two AI systems: a generator and a discriminator. Together, they make up a so-called generative adversarial network (GAN). The generator creates fake video samples, and the discriminator then tries to determine whether each one is real.

Based on the discriminator's feedback, the generator learns from its errors and avoids them in the next attempt. This way, each result is more convincing than the previous one.
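To make this loop concrete, here is a minimal, illustrative sketch in Python with PyTorch. Every detail below (the noise dimension, the 512-unit hidden layers, the flattened 64x64 images) is our own simplifying assumption chosen for demonstration; real deepfake tools use far deeper convolutional networks, but the generator-versus-discriminator feedback loop works the same way.

```python
# A minimal GAN sketch in PyTorch. All sizes here are illustrative
# assumptions, not values from any real deepfake tool.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64 * 3   # a flattened 64x64 RGB image
NOISE_DIM = 100         # the random vector the generator starts from

# Generator: maps random noise to a synthetic image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round; real_images has shape (batch, IMG_DIM)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. The discriminator learns to tell real images from generated ones.
    fakes = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. The generator learns from the discriminator's feedback: it is
    #    rewarded when its fakes are misclassified as "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key step is the generator update: the generator is rewarded precisely when the discriminator mislabels its fakes as real, so every improvement in the detector immediately becomes a training signal for the forger.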

Why should we care? Deepfake examples

In March 2019, the managing director of a British energy company was deceived with the help of deepfake software and robbed of €220,000. Attackers used AI-generated audio to imitate his boss, the head of the parent company in Germany, demanding that he transfer money to a Hungarian supplier within an hour. Since the leader's voice was convincingly imitated, right down to his German accent, there was no obvious indication that something was wrong.

As a result, all the money was lost. From the Hungarian account, it was sent to Mexico and then scattered around the world to cover the thieves' tracks. However, they didn't stop there: they asked for a second urgent transfer so that the "supplies from Hungary" would "go even faster." This time the British director sensed something was amiss and called his real boss.

Such examples understandably cause anxiety. If people see Scarlett Johansson or another celebrity in an adult video, they can reasonably suspect it's fake. But what about vice presidents, managing directors, chief executive officers, and others who may become victims of scammers?

Furthermore, creating deepfakes with AI no longer requires specific skills or knowledge. Abusers need only photos, videos, and machine learning algorithms to produce a fictitious result, and there are now applications that let them generate realistic clips with little effort. We have reached a point where we can no longer implicitly trust the content we see in videos or take everything at face value.

The fight against deepfakes

On September 5, 2019, Facebook announced an upcoming Deepfake Detection Challenge (DFDC) on its official AI blog. Together with Microsoft, the Partnership on AI, and academics from world-famous universities, the company formed a coalition to fight AI-based fakes.

The developers who come up with the best algorithms for instantly detecting deepfakes will receive grants and awards from a total prize fund of $10 million. The key goal of the challenge is to push the industry toward technology that lets everyone easily identify fakes and avoid manipulation.
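To give a sense of what such detection systems look like, here is a small sketch of one common approach: a binary classifier that scores individual video frames as real or fake. This is purely our own illustration, with a made-up architecture and an assumed 64x64 input size; it is not code from the DFDC or any submitted solution, and a real detector would need training on large labeled datasets.

```python
# An illustrative frame-level deepfake detector: a tiny CNN that outputs
# the probability that a single 64x64 RGB frame is fake. The architecture
# and input size are assumptions for demonstration only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # P(frame is fake)
)

def fake_probability(frame: torch.Tensor) -> float:
    """frame: a (3, 64, 64) RGB tensor with values normalized to [0, 1]."""
    with torch.no_grad():
        return detector(frame.unsqueeze(0)).item()
```

In practice, competitive detectors go beyond single frames and combine per-frame scores with temporal cues, such as unnatural blinking or lighting that is inconsistent from one frame to the next.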

Over the past two years, the Defense Advanced Research Projects Agency (DARPA) has spent $68 million on developing solutions for detecting deepfakes. Since 2016, the agency has run the Media Forensics (MediFor) program, which is dedicated to research in this area.

In August 2019, the University of Oregon announced research aimed at combating deepfake photos, audio, and video. Its purpose is to test one of the most unusual ideas in the field: a group of scientists is trying to teach mice to recognize differences in speech that are imperceptible to the human ear, and then build a machine recognition mechanism around the same cues.

Measures against deepfake AI are also being taken at the legislative level. A few days ago, on October 7, 2019, California Governor Gavin Newsom signed two new pieces of legislation designed to prevent harm associated with AI-based fictions.

The first, Assembly Bill 602, entitles victims of pornographic deepfakes, which are estimated to comprise 96% of all deepfake content, to sue their creators. The second, Assembly Bill 730, aims to protect politicians from career and reputation damage.

It prohibits distributing deepfakes of political candidates within 60 days of an election. Although this is not enough to ensure full legal protection, it's certainly a step in the right direction. We believe other countries will soon join the fight with legislative measures of their own.

Final words

Emerging technologies are often a double-edged sword. On one hand, artificial intelligence provides a variety of benefits: automation of business processes, reduction or elimination of manual work, malware detection, protection from hacker attacks, and much more. On the other hand, it can spawn entirely new dangers like deepfakes, which demand new ways of fighting AI-driven cybercrime.

At the moment, developers all over the world are creating machine learning algorithms to tackle this problem. At our company, we are also concerned about the emergence and spread of deepfake AI, and we strongly recommend double-checking any message that asks for an urgent money transfer or sensitive data.

If you have thoughts or questions about this topic, you're welcome to leave a comment below. To learn more about machine learning and its concepts, you can also read our article "Black Box in Machine Learning: The Metaphor Explained".
