AI Developers Are Playing Dirty – And We’re Forced to Join Their Game
15.07.2025

OpenAI earns roughly $833 million a month from ChatGPT, a system trained on journalists' and writers' work without a penny of compensation to the authors. Courts have finally begun hearing copyright infringement cases, but whilst legislation struggles to keep pace with technology, AI giants continue rigging the game in their favour. The situation is particularly complex for Ukrainian organisations: to remain relevant and serve their communities, they're compelled to use tools built on questionable ethical foundations.
OpenAI is earning approximately $833 million monthly – that’s $10 billion in annual revenue as of June 2025. A portion of this money represents direct monetisation of others’ labour. New York Times journalists, writers, artists – their work became ‘fuel’ for ChatGPT, yet the authors haven’t seen a single penny in compensation.
What’s most infuriating isn’t the mere fact of content theft. What’s maddening is that they’ve been doing this for years, knowing full well it’s unethical, and understanding that whilst legislation lags behind technology, they can cheerfully pillage with impunity.
Then, when faced with legal action, they claim: ‘But we’re using publicly available data!’ Publicly available, yes – but does that grant the right to earn billions from others’ creative work?
They Knew Exactly What They Were Doing
In November 2024, Suchir Balaji, a young OpenAI researcher, died. He had left the company over its use of copyrighted data and penned a detailed essay explaining why he believed the company was breaking the law. A month after the essay was published, he was found dead in his San Francisco flat; the medical examiner ruled it suicide.
No conspiracy theories here – simply the tragedy of someone who couldn’t remain silent. But his story reveals that inside these companies are people who grasp the scale of the problem. ‘If you believe what I believe, then you should just leave the company,’ he told The New York Times in his final interview.
The Courts Have Started Moving
The New York Times filed suit against OpenAI and Microsoft in December 2023, seeking billions in compensation for using millions of articles without permission. In March 2025, a federal judge allowed the case to proceed, dismissing only a portion of the less significant claims.
This isn’t an isolated incident. Writers, comedians, and artists have filed dozens of lawsuits against OpenAI and other AI companies. Each case could cost the companies up to $150,000 per wilfully infringed work.
And Now About Us
Here’s the real challenge: organisations like ours, ALLIANCE.GLOBAL, face a difficult choice. To remain relevant and to help our communities, we’re forced to use tools built on dubious ethical foundations.
When we teach people to work with AI – and this is critically important for their future – we consciously choose to use the best available tools. When we use ChatGPT for content creation or Google AI for analytics, we make a pragmatic choice between an ideal world and the real needs of our people.
But there’s no alternative. If we ignore AI, our people will lose out in the job market to those who master these tools. Meanwhile, OpenAI is actively collaborating with the military, despite their stated mission to ‘ensure that artificial general intelligence benefits all of humanity’.
Particularly Challenging for Ukraine
Since the start of the full-scale invasion, Ukraine has been actively collaborating with tech companies, developing cutting-edge AI solutions for defence. Ukrainian agencies have consciously chosen partnership with American companies to strengthen the country’s defence capabilities. Palantir works with more than half a dozen Ukrainian departments, from the Ministry of Defence to the Ministry of Education, helping analyse data to protect the country.
At the same time, the US Africa Command (AFRICOM) considers OpenAI technologies ‘mission-critical’ for its operations, demonstrating how technologies developed for one purpose end up being used in very different contexts worldwide.
These technologies genuinely help our defenders defeat the enemy. But questions arise about fair distribution of benefits from innovations created during wartime. Ukrainian companies are developing their own AI models, using battlefield data to create solutions that help save Ukrainian lives.
Tech Giants Are Changing the Rules
Meanwhile, the situation is evolving faster than many realise. OpenAI, which just a year ago prohibited using their products for any military purposes, now actively works with the Pentagon. In December 2024, they announced a partnership with weapons manufacturer Anduril to develop military AI.
Meta changed its policy, allowing the US military and defence contractors to use its openly released Llama models. Anthropic struck deals with Amazon and Palantir to supply its models to US defence and intelligence agencies.
It turns out ‘AI democratisation’ is simply a new way of creating dependency. Companies that built their models on stolen content now dictate the terms of using their own technologies.
What Are We Doing?
We’re not being fooled by fine words about ‘innovation’ and ‘benefit to humanity’. These companies are creating new forms of technological dependency, where organisations from smaller markets become consumers of technologies often developed using global content without fair compensation to authors.
But simultaneously, we can’t afford to fall behind. Our people – the LGBTIQ+ community, which already faces discrimination – cannot afford technological disadvantage as well.
So we make a difficult choice: we teach AI skills through our programmes, but we also tell the truth about how these technologies were created. We explain why it’s important to demand transparency, fair compensation for creators, and ethical standards. Our artificial intelligence courses teach not just technical skills but also critical thinking about the technology itself.
Our project with Google.org AI Opportunity Fund isn’t capitulation to the system. It’s an attempt to give our people tools so they can compete in a world whose rules we didn’t set. But it’s also an opportunity to show that technology can be used to protect vulnerable communities, not merely to enrich corporations.
Rapidly Changing Rules
Most frightening is how quickly the rules change. OpenAI reportedly projects losses of around $14 billion in 2026, despite record revenues. This pushes the company to seek new sources of profit, and military contracts are becoming increasingly attractive.
When technologies trained on millions of people’s works become tools of warfare, we have the right to ask: where’s our share of the profits? Where’s our voice in deciding how these technologies are used?
Perhaps our generation cannot fundamentally change the rules. But we can ensure the next generation is better prepared for this struggle. Sometimes the best strategy is to use the tools available to build a more just world whilst simultaneously working to change the rules themselves. That is why we teach our people not only to use AI but to analyse these technologies critically, understand their limitations, and demand greater transparency and fairness from developers.