Analysis

AI-Generated Content Threatens Information Credibility in Kosovo

Content generated by artificial intelligence, AI, is spreading rapidly across platforms like Facebook and TikTok, threatening the credibility of information in Kosovo and raising fears of deeper polarisation and of mistrust in the country’s fragile media landscape.

Just minutes after Kosovo MPs elected Vetevendosje’s Dimal Basha as Parliament Speaker on Tuesday, a fake AI-generated video surfaced on Facebook, showing Basha’s likeness claiming to be the father of Kosovo’s ethnic Ashkali community.

In under eight hours, the video amassed over 80,000 views on Facebook.

Data collected by fact-checking organisations shows that the rapid development of artificial intelligence has fuelled an industry of video and audio manipulation.

Out of 70 pieces of Facebook content verified during July and August by Krypometër, BIRN Kosovo’s fact-checking platform (certified by the International Fact-Checking Network, IFCN, and the European Fact-Checking Standards Network, EFCSN), 25 were AI-generated.

The situation is worse on TikTok, where fact-checkers discovered more than 100 instances of falsified materials created with AI.

These manipulated pieces often target political leaders with degrading portrayals, like the recent fake videos of Serbian President Aleksandar Vucic falling down, or Kosovo’s acting Prime Minister Albin Kurti dressed as North Korea’s Kim Jong Un.

Social media expert Valon Kerolli told Prishtina Insight that the amount of false content circulating will increase, particularly in the run-up to Kosovo’s local elections in October.

“In a country like ours, where media literacy, technological literacy, and even general education are still very low, the impact of these videos is deeply concerning,” Kerolli said.

Kosovo ranks at the bottom in international assessments of critical thinking, with PISA tests placing its students among the weakest globally. The Ministry of Education has itself acknowledged the absence of media literacy in the public school curriculum. 

Kerolli’s own analyses have found that AI-generated materials are being shared by hundreds of ordinary users.

“We see videos being shared thousands of times, watched by millions, and yet no one clarifies they are fake. By the time fact-checkers step in, the damage is already done,” he warned.

“This makes it extremely dangerous, and manipulation with this kind of content is happening silently, with no serious response from state institutions, civil society, or the media,” Kerolli added.

Kerolli argues that encouraging people to think critically about the information they consume should be society’s first line of defence.

Political parties recognise the consequences

Illustration/ Political Parties in Kosovo: BIRN


Former Kosovo Prime Minister Ramush Haradinaj has repeatedly been the subject of AI manipulations. Fact-checkers have documented cases where false statements were attributed to him through fabricated videos and audio recordings.

Kushtrim Xhemaili, spokesperson for Haradinaj’s party, confirmed that within the party they regard AI-manipulated videos and audio recordings as a serious problem.

“We constantly face organised attacks on social media, not just during election campaigns but all the time, mainly on Facebook and TikTok.”

This phenomenon, he adds, “is especially dangerous because AI-generated content looks so real that audiences often fall prey to propaganda.”

“Moreover, the inability to remove this content from social platforms makes it even harder to deal with,” Xhemaili said.

While political opponents are suspected of producing much of this content, foreign interference cannot be ruled out.

Two BIRN investigations, in 2023 and 2025, found Kremlin-linked disinformation campaigns fuelling hate speech and crises in Kosovo. Researchers caution that Russian networks may also use AI to generate and amplify disinformation.

According to sociologist Artan Muhaxhiri, the danger is that such manipulations distort the electoral process itself.

“If these strategies succeed, the governments formed afterward will not reflect the real will of the electorate,” he warned. 

According to Muhaxhiri, deepfakes are particularly dangerous because they “blur the boundary between truth and falsehood.”

Such content, he explains, “creates a hyper-reality that undermines the public sphere and complicates normal communication.”

The response, he argues, must be urgent and professional to prevent negative socio-cultural effects from multiplying.

Dren Gërguri, professor at the Faculty of Journalism, emphasises that AI-driven disinformation can destabilise democratic processes.

“The consequences range from spreading disinformation that can influence key social processes—such as election results—to damaging public trust in institutions and deepening polarisation,” he said.

Gërguri stresses the need for both formal and informal initiatives that prepare citizens for today’s information environment.

“Alongside media literacy, we must also work on artificial intelligence literacy, so that citizens understand more about AI and how this technology functions,” he added.

To prevent the negative effects of AI-manipulated videos during election campaigns, Muhaxhiri insists that political parties should react immediately and forcefully to every instance.

“It is crucial for influential media outlets to have dedicated fact-checking teams. This cleanses public discourse of AI-generated pollution, reduces its impact on opinion, and discourages those who might use it in the future,” he said.

Muhaxhiri adds that the situation becomes even more problematic when manipulated content influences older generations.

“Studies show that deepfake videos mostly reinforce existing political beliefs rather than change them. Older people are more vulnerable because they take them more seriously, unlike younger generations who, being more technologically aware, often treat them satirically—much like internet memes,” he explained.

English version prepared by Ardita Zeqiri
