Le Monde – (Opinion piece) Call from a hundred NGOs: ‘AI as developed is fuelling a global exploitation system’.
On the initiative of Feminists Against Cyber Harassment, VoxPublic, Amnesty International France and the Ligue des Droits de l’Homme, an opinion piece published in Le Monde and signed by around a hundred organisations calls for an approach to artificial intelligence grounded in human rights and social and environmental justice.
As France prepares to host the Artificial Intelligence (AI) Action Summit, more than a hundred civil society organisations are sounding the alarm: human rights and environmental protection must stop being sacrificed on the altar of technological progress. Current AI developments perpetuate discrimination, exacerbate inequalities, destroy the planet and fuel a system of global exploitation. As these issues will not be included in the Summit’s official programme, we outline them here.
While AI technologies are developing rapidly, political leaders seem in no hurry to consider the human, social and environmental issues they raise. Ignoring the warnings of civil society organisations, they prefer to view these technologies solely through the prism of growth, productivity gains and profit.
The potential future existential risks posed by AI are a distraction: these technologies already have very concrete impacts on the most vulnerable and discriminated-against populations, and already undermine human rights. Because they are built on biased datasets and embed the skewed worldviews of their designers, AI tools perpetuate stereotypes, reinforce social inequality and limit access to resources and opportunities. Moreover, AI systems are deployed within the discriminatory and unequal structures that exist in every society. Their uses, often against a backdrop of austerity policies, deepen inequalities in access to health, employment, public services and social benefits. The scandals that have erupted in recent years are clear evidence of this: health algorithms with sexist and racist biases, an Austrian employment service algorithm refusing to direct women towards the IT sector, and the profiling of and discrimination against welfare beneficiaries in France, Denmark and the Netherlands.
Yet technologies are rarely the solution to fundamentally systemic problems. It would be better to address the root causes of these issues rather than risk exacerbating human rights violations with AI systems. As more decisions are entrusted to algorithms, their biases can have dramatic consequences for our lives. Predictive AI systems are increasingly used in justice and law enforcement, at the risk of amplifying systemic racism. In the United States, for instance, an AI tool used to calculate recidivism risks identified Black defendants as ‘high risk’ twice as often as white defendants. But even if these biases were mitigated, focusing on predictive tools distracts us from considering broader reforms to the prison system.
These systems are also used for surveillance and identification purposes at borders and in conflict settings, such as Lavender, an AI targeting tool that has caused the deaths of thousands of civilians in Gaza. Often, these technologies are developed in the Global North, such as the tools created in Europe and used to surveil the Uyghur population in China.
Generative AI systems are also exploited for disinformation and destabilisation purposes by repressive regimes and private actors. Bots that manipulate information on health issues, racist disinformation during the last European elections, and audio and video deepfakes featuring electoral candidates are just some examples of how these technologies threaten the rule of law. AI-generated content also endangers women and children: 96% of deepfakes are non-consensual sexual content, widely used to harm women and to produce child sexual abuse material.
Moreover, these impacts are part of a global system of exploitation. AI, particularly generative AI, is an environmental disaster. By 2027, generative AI will require as much electricity as countries such as Argentina or the Netherlands consume. The carbon emissions of Big Tech increased by 30 to 50% in 2024 as a result of the rapid development of these technologies. And the Global South is the most affected: the proliferation of data centres and the extraction of ores such as cobalt (used in batteries, for instance) harm the health of local populations, pollute water and soil, and fuel violence and armed conflicts.
Inequalities between the Global North and South are also exacerbated by the technologies used for online content moderation. Digital giants allocate more resources to the Global North, favouring certain dominant languages and cultural narratives at the expense of others. Not to mention that AI systems are predominantly trained by exploited and underpaid workers from the Global South. For example, OpenAI paid Kenyan workers less than two dollars an hour to carry out the violent and taxing work of labelling toxic content.
In light of these colossal issues, the European AI Act, presented as an instrument to protect rights and freedoms, falls short, particularly on surveillance and predictive policing. Moreover, this regulation will not apply beyond the borders of the European Union, even though the threats to human rights and the environment are global, and the export of surveillance AI generates profits for European companies. While European governments call for “sovereignty” in AI, the challenges posed by these systems transcend borders. Far from being merely a technological issue, AI concerns everyone. Everyone should have the ability to shape its development, or to reject it if it does not align with our vision of society. True progress lies in binding frameworks, democratic developments, and approaches that centre international solidarity and the most affected communities, in order to place human rights and environmental justice at the core of AI regulation.
The list of signatory organisations is available here.