Facebook, X and TikTok Abandon Journalist Fact-Checkers – AI Now Decides What You Can Read
08.07.2025
Major social networks (X, Meta and TikTok) have almost simultaneously replaced professional fact-checkers with AI systems for combating misinformation. This shift could either help or harm the fight against disinformation, particularly for Ukraine, which already has successful experience of using AI against Russian propaganda through the company Mantis Analytics.
On 1st July 2025, social network X launched an experiment that could change the internet forever: artificial intelligence can now write Community Notes – those very notes designed to expose disinformation. Several months prior, Meta announced it was abandoning professional fact-checkers in favour of a crowdsourced system, whilst TikTok launched testing of its Footnotes feature.
For the first time in history, bots have become not only disseminators of information, but also its “exposers”. But is Ukraine ready for an era when machines will decide what’s true and what’s false?
How Machines Learn to Recognise Lies
The Community Notes system, which Elon Musk expanded after purchasing Twitter, had previously required real people to write contextual notes for dubious posts. Now X allows AI agents to do this automatically. Keith Coleman, X's Vice President of Product, explains:
“They can help create far more notes more quickly with less effort, but ultimately the decision about what’s actually useful to show still remains with people.”
Ukrainian company Mantis Analytics, which has been fighting disinformation using AI for three years, understands this problem from within.
“Now the volumes of disinformation are too large for people to physically detect,” explains the startup team.
Their experience shows: artificial intelligence can indeed find patterns in massive datasets, but “this doesn’t mean it will always be correct, simply that in most cases it will provide a reasonably adequate assessment.”
The Budget Revolution: Why Platforms Are Ditching Experts
Meta’s decision to completely abandon its third-party fact-checking programmes in favour of a crowdsourced model isn’t coincidental. In January 2025, Mark Zuckerberg announced that Meta was ending its fact-checking programme and replacing it with a Community Notes system. TikTok has also launched testing of its Footnotes feature – a new capability for users to add relevant information to content.
The reason is straightforward: professional fact-checking is expensive and slow. Research shows that Community Notes on X are generally useful, but only certain types of disinformation ever receive labels: because a note is published only when users with differing viewpoints agree it is helpful, politically contentious claims rarely reach that consensus and so go unmarked.
When Robots Get It Wrong
The fundamental problem is that AI models often “hallucinate”, inventing context that isn’t grounded in reality. X’s own AI model, Grok, has already committed several rather catastrophic blunders. And if a model is optimised to sound “helpful” rather than to complete the fact-check accurately, its AI-generated notes could turn out to be entirely wrong.
Damian Collins, a former British technology minister, warns of the risk of “increasing the promotion of lies and conspiracy theories” in the decision to “leave bots to edit the news”.
Ukrainian Experience: How to Protect Yourself in the New Reality
In Ukraine, where an information war against the Russian propaganda machine has been waged alongside the physical one for more than three years, the new systems could both help and harm. Cybersecurity experts warn about significant risks: undermining trust in authoritative sources, manipulating public opinion, and interfering in electoral processes.
Ukrainian experience in combating disinformation shows that the best approach combines technology with human expertise. Mantis Analytics first cleans data, then applies AI, and in the third stage uses NLP and LLM models to identify narratives, localise events, and flag potential disinformation. This Ukrainian company prepares daily reports for the National Security and Defence Council on Russian information influence, detecting fakes with 96% accuracy.
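To make the three-stage workflow described above concrete, here is a deliberately simplified sketch in Python. This is not Mantis Analytics' actual code: the cleaning rules, the pattern list, and the scoring threshold are all illustrative stand-ins for what in a real system would be trained NLP and LLM models.

```python
import re

# Stage 1: normalise raw text (strip URLs, collapse whitespace, lowercase).
def clean(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)
    return re.sub(r"\s+", " ", text).strip().lower()

# Illustrative manipulation markers; a production system would learn these
# patterns from data rather than hard-code them.
SUSPICIOUS_PATTERNS = [
    "shocking truth",
    "they don't want you to know",
    "100% proof",
]

# Stage 2: a crude pattern detector standing in for a trained model.
def pattern_score(text: str) -> float:
    hits = sum(p in text for p in SUSPICIOUS_PATTERNS)
    return min(1.0, hits / 2)

# Stage 3: combine signals into a verdict; a real pipeline would call
# NLP/LLM models here to identify narratives and localise events.
def classify(text: str, threshold: float = 0.5) -> dict:
    cleaned = clean(text)
    score = pattern_score(cleaned)
    return {"text": cleaned, "score": score, "flagged": score >= threshold}

post = "SHOCKING TRUTH they don't want you to know! https://example.com/fake"
print(classify(post))
```

The point of the sketch is the division of labour: cheap deterministic cleaning first, fast heuristic filtering second, and expensive model-based analysis only for what survives, with a human analyst reviewing the final output.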
What to Do Now
The new reality demands a fresh approach to consuming information. When fact-checking systems become automated and potentially unreliable, it’s critically important to develop one’s own media literacy skills.
The best way to avoid being deceived by a social media post is to notice what emotions it provokes: emotionally charged content is more often manipulative. For businesses, this means investing in professional monitoring of the information space, since automated systems may miss sophisticated forms of disinformation or, conversely, mistakenly flag truthful information as dubious.
The Era of User Responsibility
Platforms evolve to maximise profit rather than to ensure the healthiest possible information ecosystem. This means users will have to take on more responsibility themselves.
The evolution from professional fact-checkers to crowdsourced systems, and now to AI moderators, reflects a fundamental shift in how society approaches truth. In an era when technologies can both create and expose disinformation, the most valuable skills are the ability to think critically and verify sources.
Ukraine, having unique experience in countering information attacks, could become a leader in developing effective methods of combating disinformation in the artificial intelligence era. But this requires investment not only in technology, but also in media literacy and critical thinking among citizens.
In this context, it’s particularly important to protect vulnerable groups who often become targets of disinformation campaigns. The LGBTQ+ community regularly faces manipulative content and fakes aimed at undermining their rights and safety.
ALLIANCE.GLOBAL works to protect LGBTQ+ rights in Ukraine, including fighting against discriminatory disinformation. If you’ve faced violations of your rights or need support in countering online harassment, get in touch with us – we’re ready to help.