AI-generated disinformation and fake news – understanding the role of algorithms in media consumption and how AI shapes information exposure to improve digital literacy skills
In the last few years, frequent internet and social media users have likely come across “street interview” videos, where content creators pose lighthearted, humorous, or thought-provoking questions to random passersby. But have you seen this new twist on the format: a street interview with medieval peasants about their daily lives?
We showed this video to people in Podgorica, Montenegro, and asked for their thoughts about it. One of them, Antonella Balic, commented:
“I would recognize it [the video above] is AI generated – it is good for cartoons, movies, for fun. But if it is for real people, speeches, etc, it might be controversial”
How did this video make you feel? Did you find it funny? Because it certainly is. However, there is a darker side to this type of content. With the rise of AI-generated videos, bad actors have impersonated doctors and medical experts to endorse unlicensed or fraudulent health products, put fabricated statements about current political events into the mouths of real people, and produced fake videos of politicians and world leaders engaged in embarrassing, illegal, or controversial acts, among many other examples. And this has been happening all over the Western Balkan region.
For example, in Albania, the fact-checking organization Faktoje reviewed a video depicting an impossibly large snake swimming on the surface of a wide river, seemingly filmed from a helicopter. While some might find it amusing, others might be frightened. But, as the reviewers from Faktoje show, it is fake.
In neighboring North Macedonia, scammers used real footage of the well-known doctor Zhan Mitrev to create a deepfake in which he appears to promote a herbal tea supposedly banned in the country. As the fact-checking organization Truthmeter notes in this report, it is fake. You can see the video here:
Across the border, a member of the Serbian Parliament posted an AI-generated photo on X, showing student protesters on their way to Strasbourg eating food under the Croatian flag. The photo is obviously AI-generated, Raskrinkavanje finds.
AI-generated videos and images are often deleted rapidly, complicating fact-checkers’ efforts to archive them as evidence. This ephemerality, combined with swift reposting in altered forms, undermines systematic tracking and allows disinformation to persist across platforms, evading accountability and amplifying its spread before verification occurs.
A fact-checker from Montenegro, Nina Đuranović (Raskrinkavanje.me), said that in an era where the boundary between genuine human expression and machine-generated content is increasingly blurred, the ability to discern fabricated material has become essential to safeguarding both the autonomy of human thought and the integrity of informed decision-making.
“As algorithms have a growing influence over public discourse, the irresponsible deployment of AI technologies poses a profound risk — from the insidious proliferation of disinformation to the gradual erosion of democratic principles”, she said.
She added that concrete examples, such as those documented by the fact-checking organization she is a part of, underscore this concern: when the average internet user encounters a video purporting to show Russian aircraft destroyed by Ukrainian forces, footage of a vessel allegedly discharging sewage into the sea, or an image depicting world leaders engaged in implausible physical gestures, their understanding of reality can be subtly — yet significantly — distorted, shaping opinions and reinforcing false narratives.
Similarly, another fact-checker from Kosovo, Hyrije Mehmeti (Hibrid.info), explained why it is important to recognize AI-generated content:
“It is essential for maintaining trust, transparency, and accountability in digital communication. And as AI tools become more advanced, distinguishing between human and machine-produced content helps audiences critically assess the source, intent, and credibility of information”.
She said the consequences of using AI-generated content without disclosure include the spread of misinformation, loss of public trust in media and institutions, and potential manipulation of public opinion.
“Furthermore, in academic and journalistic contexts, it also raises ethical concerns regarding authorship, originality, and intellectual honesty”, said Mehmeti.
Media and digital literacy have become more critical than ever as generative AI models grow increasingly sophisticated and their output becomes difficult to distinguish from authentic content with the naked eye. If fake content is nearly impossible to detect, bad actors can exploit it for their own agendas — potentially undermining political processes and even threatening the foundations of democracy.
To protect ourselves from AI-generated disinformation, we must remain vigilant and always double-check suspicious content. Consult tech-savvy friends, family, or fact-checking organizations, and review comment sections for additional context. Before sharing, pause and verify: checking first helps stop the spread of fake news.
AUTHORS
Geri Emiri, Journalist
Dallandyshe Xhaferri, Journalist
Matej Trojachanec, Fact checker
Biljana Matijašević, Journalist
Naida Odobašić, Young European Ambassador