Anyone can fall victim to such a crime. The expert admits: We have become helpless
– Today, cybercriminals need only a few to a dozen or so seconds of a recording of someone’s voice to generate any utterance and – even worse – pair it with a synchronized face, lip movements and matching facial expressions (…). Cybercriminals’ money has driven this technology forward to the point that we have become helpless against it – explains Dr. Eng. Karol Jędrasiak from the Technology Transfer Center of the WSB University in Dąbrowa Górnicza in an interview with “Wprost”.
Krystyna Romanowska, “Wprost”: The editorial office of “Wprost” fell victim to a deepfake. Cybercriminals generated a video in which journalist Paulina Socha-Jakubowska, together with Silver TV star Beata Borucka, known online as Mądra Babcia, praises a high blood pressure medicine for seniors on the “Wprost Opinie” podcast. Even though no such event ever took place, the footage and the voices seemed authentic. What might the process of making such a video look like?
Dr. Eng. Karol Jędrasiak: Today, cybercriminals need only a few to a dozen or so seconds of a recording of someone’s voice to generate any utterance and – even worse – pair it with a synchronized face, lip movements and matching facial expressions. Noise-free, good-quality recordings, such as podcasts or studio material, work best for this purpose. There are also tools that allow you to add any body movements.
Such a deepfake was probably created in a few minutes from the original material, i.e. the podcast, plus a quick search for photos on the Internet. The photos could also have been cut directly from the footage. On top of that came the soundtrack: a voice clone produced with an appropriate algorithm.
A deepfake does not have to be limited to a so-called talking head; in such a video a person can be made to dance, sing and fight. Unfortunately, the possibilities are virtually limitless.
Deepfake production has been developing rapidly over the last several years, and the number of cyberattacks grows by one or two thousand percent annually. Estimates suggest the number of attacks will only keep rising. It is enough to provide the algorithm with a photo, and – trained on millions of faces – it is able to create any virtual avatar.
Who is most at risk from these types of attacks?
Your example shows that journalists and celebrities are at risk, but of course so are politicians and anyone who manages large sums of money: presidents, company directors, local government officials, military officials, representatives of the uniformed services, people of public trust.
Interestingly, deepfake technology started out with good intentions. It was meant to realistically generate images and voices and to produce films useful to people, e.g. virtual avatars to assist in everyday interactions with a computer. Unfortunately, the evolution of this technique has turned to the dark side.
From its infancy, the great figures of this world have been its victims; the first high-profile deepfake involved President Barack Obama. Mark Zuckerberg and others were also targeted. A very high-profile case concerned the Prime Minister of Belgium, Alexander De Croo – based on one of his statements, a false statement was generated in which he linked COVID-19 to the climate crisis. There are also known cases of politicians from Great Britain, Ukraine and Poland who thought they were talking to Alexei Navalny’s staff, but were actually talking to Russian pranksters using this technology.
One of the most famous cases of fraud using a deepfake involved the fraudulent authorization of a transfer of USD 243,000 from one company.