450 million Europeans could have their private messages scanned automatically
- 450 million Europeans whose private messages could be scanned
- 350 organisations denounce the measure
- 5 European states firmly opposed
An explosive project in Brussels
Brussels. Behind its coldly technical name, Chat Control 2.0 has become a political battleground. Officially, the text aims to protect children by requiring digital platforms to automatically scan private messages. In reality, an unprecedented shift is taking place: the messaging systems used by 450 million Europeans could be transformed into machines for widespread surveillance.
European Commissioner Ylva Johansson accepts the political risk. “We can no longer turn a blind eye. Every day, thousands of images of child abuse circulate. The status quo is untenable”, she repeats. But on the other side of the debate, NGOs, journalists’ unions and part of Parliament are denouncing the radical undermining of end-to-end encryption, the technical guarantee that protects our conversations from prying eyes. The debate is all the more lively because it is not limited to a quarrel between experts: it directly affects our conception of privacy, freedom of expression and European democracy.
How does Chat Control 2.0 work?
Today, secure messaging systems operate on a simple principle: end-to-end encryption. It is the equivalent of a locked letter, where only the sender and recipient hold the key. Neither WhatsApp, Signal nor Messenger can read what is written. If the project is adopted, the logic would change: even before the letter reaches its recipient, it would be opened by a machine that examines its contents. If anything is deemed suspicious, an automatic report would be sent to the authorities.
This difference may seem technical, but it overturns the very promise of confidentiality. Platforms would have to implement detection algorithms trained to identify child pornography images or conversations deemed suspicious. These tools would be permanently activated on all users, without distinction.
| Current situation | With Chat Control 2.0 |
|---|---|
| Encrypted, tamper-proof messages | Messages scanned before delivery |
| Source confidentiality protected | Risk of exposure for journalists and NGOs |
| Targeted alerts | Automatic reporting, with no human filter |
| Privacy guaranteed | Privacy compromised by default |
Understanding encryption in 30 seconds
Imagine you’re sending a letter to a friend. You lock it with a padlock to which only the two of you have the key. Neither the postman, nor the post office, nor anyone else can open it. This is end-to-end encryption.
With Chat Control 2.0, the process changes: before the letter arrives, a machine opens it, reads it, sometimes saves an extract, then closes it again. When it arrives, you think it has been kept confidential. But in reality, it has been opened on the way.
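The padlock analogy can be sketched in a few lines of Python. This is a toy illustration using XOR with a shared key, not real cryptography (actual messengers rely on protocols such as the Signal protocol); the only point it makes is that without the key, the intercepted bytes are unreadable to anyone in between.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same key twice
    # restores the original message.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"            # only sender and recipient hold this "padlock key"
message = b"See you at 8pm"

ciphertext = xor_crypt(message, key)          # what the network (the "postman") sees
assert ciphertext != message                  # unreadable without the key
assert xor_crypt(ciphertext, key) == message  # only a key holder can open the letter
```

Under Chat Control 2.0, the scanning would happen before this lock is applied, or on the user's device, which is precisely why critics say it breaks the promise the padlock makes.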
The major risks: false positives and self-censorship
Supporters of the text claim that technology can distinguish between illegal content and innocent exchanges. But the history of artificial intelligence shows a more nuanced reality. False positives – cases where a photo or message is wrongly identified as suspicious – could number in the millions.
In the United States, a father has already had his Google account blocked after sending a medical photo of his child to his doctor. Transposed to a European scale, this type of error becomes dizzying. Holiday photos, intimate conversations, medical exchanges: anything can be misinterpreted by an AI.
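The scale problem is a matter of simple base-rate arithmetic. The figures below are purely hypothetical assumptions chosen for illustration, not measured rates, but they show why even a seemingly tiny error rate produces enormous absolute numbers:

```python
# Illustrative base-rate arithmetic with hypothetical numbers:
# a small per-message error rate becomes huge at EU scale.
messages_per_day = 5_000_000_000   # hypothetical volume across EU messengers
false_positive_rate = 0.001        # hypothetical: 0.1% of scans wrongly flagged

false_alarms_per_day = messages_per_day * false_positive_rate
print(f"{false_alarms_per_day:,.0f} wrongly flagged messages per day")
# → 5,000,000 wrongly flagged messages per day
```

Because the overwhelming majority of messages are innocent, even a highly accurate classifier flags far more innocent exchanges than genuinely illegal ones.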
Beyond individual errors, it is the collective climate that changes. When people know that their messages may be read, they censor themselves. Researchers call this the panoptic effect: feeling watched changes the way you write, testify and create. A society under constant surveillance becomes more cautious, less creative and less daring.
A fractured European Union
On this issue, traditional political lines are breaking down. In the Council, Germany, Austria and the Netherlands rejected the text, arguing that it threatened the proportionality guaranteed by European law. France, Spain and Italy are defending a robust version, in the name of the safety of minors.
In Parliament, the divisions are just as deep. Left-wing groups, ecologists and critical liberals refuse to sacrifice privacy, while part of the EPP (right) supports the Commission. “Europe must not choose between protecting children and protecting democracy. It must do both”, insists Sophie in ‘t Veld MEP.
This tug-of-war reflects a wider divide: what kind of Europe do we want? A Europe that protects freedoms or a security-conscious Europe prepared to sacrifice everything for an immediate objective?
Credible alternatives
Critics of the project point out that other solutions exist. Rather than imposing a systematic scan, there are several ways of reconciling security and respect for fundamental rights:
- Stronger voluntary reporting: simplify the filing of complaints and provide legal protection for associations and citizens who report incidents.
- User-side tools (opt-in): activated by default for minors, optional for adults, without breaking global encryption.
- Judicial cooperation and human resources: invest more in specialised units, strengthen cross-border cooperation, pool expertise.
- Transparent audits of algorithms: companies required to publish their false positive and negative rates.
- Targeted action on illegal content: dereferencing and blocking content that has already been identified, rather than scanning all conversations.
“The EU must innovate in investigation, not mass surveillance.”
Manon Aubry, president of The Left group
Neuroscience: why is this debate causing such a stir?
If the project arouses such emotion, it’s also because it touches on three major psychological levers:
- Intimacy: the basis of personal safety. When it disappears, the brain goes on alert.
- Panopticism: knowing that you are being watched changes behaviour, even if no-one is actually reading your messages.
- Uncertainty: not knowing what is being monitored creates stress, mistrust and social withdrawal.
These mechanisms are not abstract: they explain why people are so passionate about this debate, even if they don’t understand all the technical details. It’s a question of collective trust.
Three scenarios for the future
In Brussels, negotiations are continuing. Three main outcomes are emerging.
| Scenario | Consequences for society | Political risks |
|---|---|---|
| Strict adoption | Widespread surveillance, self-censorship, loss of confidence | Image of a freedom-destroying EU, legal disputes |
| Targeted compromise | Scan limited to minors / opt-in, independent audits | Ongoing debate, successive adjustments |
| Withdrawal | Preservation of encryption, other tools strengthened | Accusations of “inaction”, need for a Plan B |
Whatever the outcome, the precedent will be a weighty one. Europe can choose to remain the continent that protects data (as it did with the GDPR), or become the continent that normalises surveillance.
Conclusion: security or freedom?
The question is often posed as a dilemma: security or freedom. But in reality, democratic societies cannot sacrifice one to save the other. Protecting children is an absolute necessity, but it must be done by targeted, proportionate and transparent means. Widespread surveillance, on the other hand, would weaken everyone’s freedoms without guaranteeing effectiveness.
The vote expected in 2025 will be a test for the EU: will it be able to protect the most vulnerable without betraying its promise of democracy and freedom?
FAQ
What is Chat Control 2.0?
A draft European regulation that would require platforms to automatically scan private communications for child sexual abuse material.
Why is it controversial?
Because it calls into question end-to-end encryption and paves the way for widespread surveillance.
Which countries are in favour or opposed?
France, Spain and Italy support it. Germany, Austria and the Netherlands are strongly opposed.
What are the main risks?
False positives, self-censorship, the end of confidentiality of sources for journalists and NGOs.
What alternatives exist?
Enhanced reporting, opt-in tools, judicial cooperation and algorithm audits.