A practical guide for activists and human rights defenders on using artificial intelligence to automate routine tasks in fact-checking whilst maintaining human control over analytical work and conclusions.

During war, information becomes a weapon. Every day, Ukrainians receive hundreds of messages about new laws, benefits, and threats. Some are true, some are disinformation, and the rest are half-truths that can be even more dangerous than outright fakes.

According to the EU's EUvsDisinfo database, over 5,500 cases of disinformation about Ukraine have been recorded since 2015. Since the full-scale invasion began, this figure has multiplied. The European Digital Media Observatory counted over 2,300 fact-checks in the first months of the war alone.

At this scale, traditional fact-checking methods cannot keep up. StopFake publishes content in 14 languages and VoxCheck has built a database of over 9,000 political statements, yet each verification still requires hours of manual work.

This is precisely where artificial intelligence can become an ally rather than a threat: not by replacing human thinking, but by freeing time from routine tasks for proper analysis.

What AI can automate

The fact-checking process consists of several stages. First, you find primary sources: official documents, statistics, statements from officials. Then you compare different versions of the information and analyse the context and chronology of events. Finally, you draw conclusions about credibility.

The first three stages are predominantly mechanical work. Searching through databases, comparing figures, gathering facts. This is where AI can save enormous amounts of time. Research from the University of Zurich showed that hybrid “human + AI” systems achieve accuracy of up to 89%, reducing initial processing time by 40-60%.

The fourth stage – evaluation and conclusions – remains purely human work. AI doesn’t understand political subtext, can’t sense nuances, and cannot assess the consequences of publishing particular refutations.

Tools available today

ClaimBuster from the University of Texas provides a free API that automatically identifies claims in text that require verification. Google Fact Check Explorer gives access to over 330,000 fact-checks from verified organisations worldwide.
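
As an illustration, here is a minimal Python sketch of calling ClaimBuster's claim-scoring endpoint, based on its public API documentation at the time of writing; the endpoint path and response shape may change, and the key shown is a placeholder you would replace with your own.

```python
# Minimal sketch: score the "check-worthiness" of a sentence via the
# ClaimBuster API. A free key is issued at https://idir.uta.edu/claimbuster/.
import requests
from urllib.parse import quote

API_KEY = "your-api-key-here"  # placeholder: replace with your own key
sentence = "Unemployment in Ukraine fell by 40 percent last year."

response = requests.get(
    f"https://idir.uta.edu/claimbuster/api/v2/score/text/{quote(sentence)}",
    headers={"x-api-key": API_KEY},
    timeout=30,
)
response.raise_for_status()

# Scores run from 0 to 1; higher means the sentence is more worth checking.
for result in response.json().get("results", []):
    print(f"{result['score']:.2f}  {result['text']}")
```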

There are more advanced solutions as well. Originality.ai reports 72.3% accuracy on new facts at a cost of $0.10 per 1,000 words. The system verifies claims, provides confidence ratings, cites sources, and explains the logic behind its conclusions.

Particularly interesting is Loki (OpenFactVerification) – an open system available on GitHub. It works through a five-stage algorithm: breaking complex claims into simpler ones, assessing their importance, generating search queries, gathering evidence, and formulating conclusions. It supports not only text but also audio, images, and video.
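
To make the flow concrete, here is a deliberately naive but runnable Python schematic of those five stages. This is not Loki's actual code: every function body below is a placeholder (the real system uses language models and web search at each step), but the structure mirrors the algorithm described above.

```python
def decompose(claim: str) -> list[str]:
    # Stage 1: split a compound claim into simpler sub-claims
    # (the real system uses a language model for this).
    return [part.strip() for part in claim.split(" and ")]

def checkworthiness(claim: str) -> float:
    # Stage 2: naive proxy for importance: claims with numbers rank higher.
    return 0.9 if any(ch.isdigit() for ch in claim) else 0.2

def generate_queries(claim: str) -> list[str]:
    # Stage 3: turn the claim into search queries.
    return [claim, f"{claim} official source"]

def gather_evidence(query: str) -> str:
    # Stage 4: placeholder; the real system calls search engines here.
    return f"<evidence for: {query}>"

def verify(claim: str, evidence: list[str]) -> dict:
    # Stage 5: formulate a conclusion; a human reviews this output.
    return {"claim": claim, "evidence": evidence, "verdict": "needs human review"}

def fact_check(claim: str) -> list[dict]:
    subclaims = [c for c in decompose(claim) if checkworthiness(c) > 0.5]
    return [verify(c, [gather_evidence(q) for q in generate_queries(c)])
            for c in subclaims]

print(fact_check("Pensions rose by 20% in March and the sky turned green"))
```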

From theory to practice: three scenarios

Consider a hypothetical situation: you need to verify information about new social payments. The traditional approach takes 15-20 minutes – opening several government websites, searching by keywords, comparing dates and amounts.

With AI, the same process looks different. You ask: “Find official documents from Ukraine’s Ministry of Social Policy about new payments from the last two months. Provide only links to gov.ua”. The system finds relevant documents within 30 seconds or reports that no official information exists. Your task is to analyse the results it finds and draw the conclusions yourself.

Another scenario: analysing recurring narratives. If the same thesis appears across multiple sources, AI can quickly identify the similarities, suggest where the narrative may have originated, and trace how the information spread.
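
One way to build such a similarity check yourself is with multilingual sentence embeddings. The sketch below uses the open sentence-transformers library with one of its multilingual models; the example claims and the 0.8 threshold are illustrative assumptions, not a tested configuration.

```python
# Group near-identical claims from different sources with multilingual
# sentence embeddings (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

claims = [  # illustrative examples, not real monitored content
    "New payments of 10,000 hryvnias will begin in May",
    "From May, citizens will start receiving 10,000 UAH",
    "The weather in Kyiv will be sunny tomorrow",
]

embeddings = model.encode(claims, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)  # pairwise cosine similarity

# Pairs above an assumed ~0.8 threshold likely carry the same narrative.
for i in range(len(claims)):
    for j in range(i + 1, len(claims)):
        score = float(similarity[i][j])
        if score > 0.8:
            print(f"Possible shared narrative ({score:.2f}):")
            print(f"  - {claims[i]}")
            print(f"  - {claims[j]}")
```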

The third case: verifying statistics. When someone makes a claim about unemployment rates, inflation, or the number of refugees, AI can instantly pull the latest official data from the State Statistics Service, the National Bank of Ukraine, or the UN for comparison.
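
The retrieval is where AI helps; the comparison itself is simple arithmetic you can keep under your own control. A minimal sketch with hypothetical numbers (always fetch the official figure from the primary source yourself):

```python
# Compare a claimed figure against the official one with a tolerance.
def check_figure(claimed: float, official: float, tolerance: float = 0.05) -> str:
    if official == 0:
        return "cannot compare: official value is zero"
    deviation = abs(claimed - official) / abs(official)
    if deviation <= tolerance:
        return f"consistent with the official figure (off by {deviation:.1%})"
    return (f"inconsistent: claimed {claimed}, official {official} "
            f"(off by {deviation:.1%})")

# Hypothetical numbers, purely for illustration.
print(check_figure(claimed=18.5, official=14.2))
```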

Limitations that must be understood

AI isn’t a magic wand. Research from the Reuters Institute shows serious limitations of the technology for non-English languages. Systems trained primarily on Western sources often don’t understand the context of events in Ukraine and may miss important nuances of Ukrainian legislation or politics.

“Hallucination” rates – when AI fabricates facts – reach 15-30% for complex claims. Systems can make mistakes with dates, confuse references, or attribute quotes to the wrong people. Therefore, any information obtained from AI needs to be verified.

The biggest problem is the lack of contextual understanding. AI can find an official document but not realise it’s outdated, cancelled, or refers to a different region. It can find statistics but not account for peculiarities in calculation methodology.

How others do it: successful examples

Full Fact in the UK has built one of the world's most comprehensive AI fact-checking systems. Their platform processes about 330,000 sentences daily from news, television, podcasts, and social media. The system automatically identifies verifiable claims, prioritises them by importance, and passes them to journalists for final verification.

The Norwegians chose an interesting approach. Faktisk Verifiserbar is a cooperative of major media outlets that jointly fund AI tool development. They have built specialised systems for photo geolocation and military equipment identification, and they publish everything they develop openly.

However, not all experiments succeed. Georgia's MythDetector reports that AI tools trained on Western languages miss the nuances of Georgian politics, and Ghana's GhanaFact has declined to use AI altogether because of cultural biases in the training data.

Ethics and ecology: the price of automation

Using AI raises important ethical questions. UNESCO has established ten principles for ethical artificial intelligence, including respect for human rights, fairness, and environmental responsibility.

The last point is particularly relevant. Training GPT-3 consumed 1,287 megawatt-hours of electricity and generated 552 tonnes of CO2 equivalent. A single ChatGPT query is estimated to use about ten times more electricity than a Google search.

For organisations with limited resources, this argues for deliberate use: reserve AI for genuinely routine tasks rather than treating it as a substitute for human thinking. It is better to invest time in learning to work with AI effectively than to burn it on hundreds of unnecessary queries.

Practical tips for beginners

Start simple. Use AI for structuring information – let it break down a long document into main points. Ask it to find specific data or statistics. Formulate clear requests: “Find official State Statistics Service statistics on Ukraine’s unemployment rate for 2024. Provide a direct link to the source”.

Always verify results. AI can make mistakes with links, confuse dates, or present outdated information as current. Use it as the first link in the verification chain, not as the final authority.
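
One cheap check you can automate is whether the links an assistant returns actually resolve and sit on domains you trust. A sketch, with hypothetical URLs and gov.ua as the example allowlist:

```python
# Sanity-check links an assistant returns: do they resolve, and are they
# on a domain you actually consider official?
import requests
from urllib.parse import urlparse

ALLOWED_SUFFIXES = (".gov.ua",)  # adjust to your own list of trusted domains

def check_link(url: str) -> str:
    host = urlparse(url).hostname or ""
    if not host.endswith(ALLOWED_SUFFIXES):
        return f"REJECT {url} (not on an official domain)"
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "OK" if resp.status_code == 200 else f"HTTP {resp.status_code}"
        return f"{status:6} {url}"
    except requests.RequestException as exc:
        return f"DEAD   {url} ({exc.__class__.__name__})"

# Hypothetical URLs, purely for illustration.
for url in ["https://www.msp.gov.ua/documents/example.html",
            "https://totally-official.example.com/law"]:
    print(check_link(url))
```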

Structure your requests: first the role (“You’re helping verify facts”), then the task (“Find official information about…”), finally the criteria (“Use only government websites”).
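
If you issue such requests often, it can help to assemble them from a small template so that the role, task, and criteria are never forgotten. A sketch:

```python
# Assemble a request from the role -> task -> criteria structure.
def build_request(role: str, task: str, criteria: list[str]) -> str:
    lines = [role, task, "Criteria:"]
    lines += [f"- {c}" for c in criteria]
    return "\n".join(lines)

prompt = build_request(
    role="You're helping verify facts.",
    task="Find official information about new social payments "
         "announced in the last two months.",
    criteria=[
        "Use only government websites (gov.ua)",
        "Provide a direct link to each source",
        "Say explicitly if no official information exists",
    ],
)
print(prompt)  # paste into whichever AI assistant you use
```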

Remember the limitations. AI doesn’t understand Ukrainian realities like local experts do. It may miss important context, not account for legislative specifics, or misinterpret political statements.

The future of fact-checking

Artificial intelligence won’t replace journalists and activists in fighting disinformation. But it can become a powerful tool that frees up time for more important work – in-depth analysis, finding cause-and-effect relationships, understanding the motives of fake news spreaders.

In a world where disinformation spreads at the speed of light, every minute matters. AI can provide the speed needed to keep up with fakes without sacrificing verification quality.

The main thing is to remember: technology is a tool, not a goal. The goal is to protect people from manipulation and give them reliable information for making informed decisions. In that, AI can be a dependable ally if you learn to work with it properly.

If you work in the human rights field, it is worth learning the basics of working with artificial intelligence. Today, this is no longer a tech geek's whim but a basic skill that can dramatically change the effectiveness of your work.
