Yes, you can recognize pre-election manipulation. An expert explains how

Ballot box, hacker; illustrative photo

Poland is the most frequently targeted country for hacker attacks in the European Union. This year's presidential election may therefore become not only an arena for the clash of political programs and candidates' visions, but also a potential site of confrontation in the digital space.

Can we trust what we see and hear in the era of artificial intelligence and deepfake materials?

Poland under digital pressure

A report by the European External Action Service (EEAS) shows that in 2024 alone, over 90 countries and 300 organizations fell victim to so-called foreign information manipulation and interference (FIMI). These activities included disinformation, influence operations, and sabotage, including around major events such as the presidential elections in Moldova and Romania, the farmers' protests in Germany, and the Olympic Games in Paris.

According to specialists at Check Point Research, Poland ranks near the top in the number of cyberattacks. Last year, our country was the target of almost 100,000 such attacks, and public infrastructure recorded an average of 2,041 incidents per week, above the global average of 1,782. Microsoft's Digital Defense Report places Poland third in Europe and ninth in the world in terms of the threat of state-sponsored cyberattacks.

Cyberattacks – to what end?

Hackers' goal need not be direct interference with voting results. It is often something much subtler, but equally dangerous: undermining trust in the entire electoral process, deepening divisions in society, and discouraging citizens from voting.

As Łukasz Boduch, an IT solutions architect at SoftServe Poland, explains, malicious activities may include DDoS attacks on the websites of the National Electoral Commission, political parties, or cybersecurity institutions. Phishing campaigns are also possible, targeting not only candidates but also their families, campaign staff, and entire teams.

One of the most dangerous tools today is manipulated content on social media, including fake videos known as deepfakes. These can show politicians saying words they never said. They are often so realistic that the average citizen has no chance of recognizing them as fake.

Artificial intelligence and social media: a new battlefield

The development of artificial intelligence technology allows manipulated materials to be created and disseminated almost instantly. Deepfakes, bots, and fake accounts create an illusion of mass support for, or opposition to, specific narratives. Their goal is to stir emotions, polarize opinions, and manipulate voters' decisions.

– Critical thinking is the basis. Check sources and look for confirmation in other official sources or reliable media, and separate facts from opinions. Particular caution is warranted with content that provokes strong emotions, sensational-sounding news, and "leaks", especially those that appear at the last minute, for example just before the election silence period, or even during it – says Boduch.

How to recognize pre-election manipulation?

The most important thing is vigilance and a critical approach to the information we encounter, especially on social media.

The specialist advises checking sources and comparing information with official communications and reputable media. One should also watch out for content that provokes strong emotions or carries sensational headlines, especially when it appears just before the election silence period.

– In the case of visual materials, what most often reveals that they were created by AI is an unnatural-looking image: figures often have glassy, empty eyes and are shown either without limbs or, for example, with six or four fingers on one hand. In videos, it is worth paying attention to unnatural movements and facial expressions, and to whether the sound is synchronized with the image or looped. Artificial intelligence is also poor at rendering elements outside the foreground; they are often blurred and look artificial – the expert adds.

It is also worth verifying the profiles that publish such content: how long they have existed, what they published earlier, and whether they show genuine activity from other users. Let's also be wary of language errors and content taken out of context.

In the event of a suspected manipulation attempt or cyberattack, you can report the incident, for example, through the BezpieczneWybory.pl portal. This is our way of contributing to the safety of the elections and of society.
