Is this the end of Private Messaging in the EU?
While most EU legislation passes silently in the background, few proposals have triggered as much backlash as the Regulation to Prevent and Combat Child Sexual Abuse.
By now it’s more commonly known as ‘Chat Control’. It’s about new rules for tech companies and online services such as WhatsApp or TikTok to detect and remove child sexual abuse material, or CSAM for short, from their platforms, including in private communications.
This is such a huge overhaul that the European Data Protection Supervisor claimed ‘the CSAM proposal would fundamentally change the internet and digital communication as we know it, and that will be a point of no return.’
The radical proposals by the European Commission and the Council have left citizens, digital rights organisations, and even the EU Parliament fearing mass surveillance and the end of confidential online communication.
It’s a proposal that has pitted two camps against one another in a fierce lobbying battle: a ‘pro-child protection’ camp on one side, versus a pro-privacy and digital rights camp on the other.
So what exactly is the CSAM proposal, what does it say about how we police online content, and what does it mean for the future of privacy in the EU?
Why propose legislation in the first place?
So let’s talk about what the problem actually is: the EU has a child sexual abuse material problem.
NCMEC, the US National Center for Missing and Exploited Children, saw reports concerning the EU increase from 23,000 in 2010 to 1.5 million in 2022, containing over 5 million images and videos.
And the Commission wants providers to take responsibility for the content on their platforms.
This already happens to a certain extent.
Under another piece of EU legislation, the Digital Services Act, these online platforms already have a legal obligation to remove CSAM, but they’re not responsible for detecting it. Many do so voluntarily, though, with permission from the EU through a special derogation (the temporary derogation from the ePrivacy Directive) that allows them to bypass normal privacy rules.
So the Commission wants stronger rules, mainly because of the dramatic increase in the spread of CSAM images in recent years.
And I mean, that is a fair goal. It’s hard to argue against, because politically, no one wants to be seen as opposing measures to tackle the spread of child sexual abuse.
But it’s not the goal, it’s the methods the Commission proposed that led to so much backlash.
Essentially, the rules would oblige providers to detect or scan, report and remove CSAM on their platforms.
This includes private communication, for example messages on Facebook Messenger, WhatsApp or emails.
Providers would be obliged to evaluate the risk of CSAM spreading on their services, and then set up measures to stop its spread.
If there proves to be a significant risk, national authorities can issue detection orders to force providers to scan their content using dedicated software.
While the tools aren’t explicitly mentioned, the Commission wants providers to use AI to scan content at a massive scale.
The proposal would also create a new EU Centre that would support member states and authorities, among other things by collecting, filtering and distributing CSAM reports to national authorities and Europol.
Perhaps the most controversial part is that the Commission wants to enable providers to also scan encrypted environments, effectively rendering encryption useless.
This could even lead to all of our private messages being read.
Criticism of the Commission proposal
Understandably, digital rights organisations, journalists, academics and legal experts have all sent open letters criticising the proposal.
But the biggest pushback came from EU institutions themselves.
The European Data Protection Supervisor and the Council of the EU’s Legal Service, which is considered the top authority on EU legislation, both pushed back against it.
First of all, the scope of the detection and scanning is heavily criticised as disproportionate.
Basically, when a platform is considered at ‘significant risk’ of being used to spread CSAM, the Commission wants to enable providers to scan the communication on an ENTIRE platform such as Messenger or WhatsApp.
This would mean that the communication of not just perpetrators but of ALL users could be scanned.
Since spreading CSAM can never be completely prevented, experts also fear that detection orders would be issued for all of the widely used services, such as WhatsApp, Facebook Messenger, Instagram and emails.
This would result in a ‘general monitoring’ of private communications, consisting of billions of messages.
It also runs into legal problems, since general monitoring or indiscriminate scanning is banned by the EU’s own legislation, case law and court rulings, which have enshrined the right to privacy.
On top of that, storing personal data after detection would be at odds with the General Data Protection Regulation (GDPR), the hallmark privacy legislation the EU passed in 2016.
All these problems could even lead to the proposal being shot down by the European Court of Justice. As the Council of the EU’s Legal Service stated, there is a ‘serious risk of exceeding the limits of what is appropriate and necessary in order to meet the legitimate objectives pursued, and therefore of failing to comply with the principle of proportionality’.
Technologies/false positives
And the thing is, these tools might not even reach their goal of detecting and stopping the spread of CSAM.
Let me explain. CSAM is usually classified into three categories: 1) known CSAM, which is already known to law enforcement authorities; 2) ‘new material’, which is yet to be detected and classified as CSAM; and 3) grooming, or the sexual solicitation of children online, such as via chat messages.
Detecting the first kind is straightforward, and there are reliable tools for that. But there will always be an error rate for false positives: images that are automatically detected as child abuse but aren’t.
Like images of children that parents and family members send in the second category, or sexting between teenagers in the third category.
The technologies for detecting new CSAM and grooming have much higher error rates and are simply not as reliable.
Even with a 1% error rate, scanning billions of messages would result in tens of millions of false positives, flooding authorities with the wrong images and making their work harder.
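To get a feel for those numbers, here is a rough back-of-the-envelope calculation. The message volume and error rate are illustrative assumptions of mine, not figures from the proposal:

```python
# Back-of-the-envelope estimate of false positives under mass scanning.
# Both numbers below are illustrative assumptions, not figures from the proposal.
messages_scanned_per_day = 1_000_000_000  # assume 1 billion messages scanned per day
false_positive_rate = 0.01                # assume a 1% false positive rate

false_positives_per_day = messages_scanned_per_day * false_positive_rate
print(f"False positives per day:  {false_positives_per_day:,.0f}")        # 10,000,000
print(f"False positives per year: {false_positives_per_day * 365:,.0f}")  # 3,650,000,000
```

Every one of those flagged messages would, in principle, need to be reviewed by the EU Centre or national authorities.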
Encryption
While there are doubts about whether this proposal would actually be effective at tackling the problem, it might come at a very steep cost.
When you call or message someone on WhatsApp, for example, the communication is end-to-end encrypted, meaning only you and the recipient can read it. The same feature exists on services like Messenger and Signal.
Well, the Commission proposal wants providers to use technologies to bypass that encryption.
It would be something like a mail service that opens up your letters and checks them before they are sent to the recipient.
This would happen through what’s called ‘client-side scanning’: content would be scanned on your device before it is encrypted, and a report made whenever illegal material is found.
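To make the mechanics concrete, here is a minimal sketch of what client-side scanning looks like in principle. The fingerprinting, blocklist and encryption steps are placeholders of my own, not the specific technology named in the proposal (real systems use perceptual hashes rather than exact hashes):

```python
import hashlib

# Hypothetical blocklist of fingerprints of known illegal images.
# Real deployments use perceptual hashes (PhotoDNA-style), not SHA-256.
KNOWN_ILLEGAL_FINGERPRINTS = {"<fingerprint-1>", "<fingerprint-2>"}

def scan_then_send(image_bytes, encrypt, send, report):
    """Scan an image on the sender's device BEFORE it is encrypted."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()  # placeholder fingerprint
    if fingerprint in KNOWN_ILLEGAL_FINGERPRINTS:
        report(fingerprint)  # flag the match to the provider / authorities
    ciphertext = encrypt(image_bytes)  # encryption only happens after the scan
    send(ciphertext)
```

The point of the sketch is the ordering: the scan sits before the encryption step, which is why critics argue the end-to-end guarantee no longer holds.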
Current encryption standards are, to say the least, frustrating for law enforcement authorities.
They argue that criminals freely exchange CSAM and carry out other illegal activities that they are unable to investigate or gather evidence on.
This is why police chiefs and Europol issued a statement in April this year, after Facebook made its Messenger app end-to-end encrypted, calling on governments and industry to ‘rollback privacy measures and encryption’.
Adding to the controversy, Apple announced its own client-side scanning system in 2021 but shelved it shortly afterwards due to privacy concerns and the risk of abuse.
Used for other purposes
Another concern is that technologies to detect CSAM could end up being used for other purposes. In fact, leaked talks between Europol, the EU Agency for Law Enforcement Cooperation, and Commission officials showed that Europol is very keen to use the new law to investigate private communications for other criminal activities.
In talks with the Commission, a Europol official went as far as requesting ‘unlimited access’ to the data resulting from detection: ‘All data is useful and should be passed on to law enforcement, there should be no filtering by the [EU] Centre because even an innocent image might contain information that could at some point be useful to law enforcement.’
On top of that, the legislation would give Big Tech access to more data, the exact opposite of EU legislation such as the GDPR and the Digital Markets Act, which were intended to limit the power of Big Tech.
Ineffectiveness and other concerns
A final criticism is that perpetrators would simply move from known services to the dark web, making the regulation ineffective and law enforcement’s job harder.
Alternative proposals
So there is a lot to criticize about the European Commission’s proposal, but this is only the first proposal.
It still has to be negotiated with the two legislators of the EU: the Council of the EU and the European Parliament, who have ideas of their own, and with which it needs to come to a final compromise.
After heated discussion, the European Parliament presented its own proposal in November 2023. It drastically toned down the most controversial parts: Parliament wants the measures to be more targeted and specific. Simply put, its proposal is more privacy-friendly and proportionate.
If we go back to the letter metaphor, Parliament wants providers and authorities to only look into letters when there is suspicion or suspicious activity linked to specific individuals or groups.
And detection orders must always be issued by a judge. In short, the Parliament rejects general monitoring.
It also rules out detection in encrypted environments, and any breaking or prohibition of encryption, so that letters cannot be opened before sending.
But the law is still being debated among the 27 Member States in the Council of the EU. A ‘general position’ requires a so-called ‘qualified majority’: at least 55% of the Member States, representing at least 65% of the EU’s population.
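To see how that voting rule works in practice, here is a simplified sketch. The population figures would come from official EU statistics, and the full rule has extra details (for example, a blocking minority must include at least four countries):

```python
def has_qualified_majority(in_favour, populations):
    """Simplified Council voting rule: at least 55% of member states,
    together representing at least 65% of the EU population.
    `in_favour` is the set of member states backing the text;
    `populations` maps every member state to its population."""
    states_threshold = 0.55 * len(populations)
    population_threshold = 0.65 * sum(populations.values())
    population_in_favour = sum(populations[state] for state in in_favour)
    return len(in_favour) >= states_threshold and population_in_favour >= population_threshold
```

This is why a handful of large countries can block a text on population grounds even when most member states are in favour.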
For months, there was barely any progress. Countries that are very critical, such as Germany, Austria and the Netherlands, formed a so-called ‘blocking minority’. Some countries want the proposal to be withdrawn altogether.
The Council has not done much to take away concerns regarding privacy and proportionality. Most importantly, its compromise still calls for detection in encrypted environments.
If we go back to the letter metaphor, instead of being able to open up your letters and read them, the Council wants to enable authorities to read them before you even send them.
This would happen through an ‘upload filter’ or ‘upload moderation’. Basically, an application on your device would scan images and videos before you send them.
The Council proposal suggests that services with chat functions should make it impossible for users to send images, videos and links if they do not consent to this scanning. But this of course completely undermines encryption, as images are scanned before encryption even takes place.
To take away some of the concern, the Council suggests classifying services into different categories, with only ‘high risk’ applications needing to take the strongest measures. But the most widely used applications would all fall under this category, and would risk being scanned en masse.
Conclusion
Since this compromise has not received enough support, it will be up to the upcoming Hungarian Presidency, starting on July 1, to come up with a text that can gather a qualified majority. If it succeeds, long and difficult negotiations with the European Parliament and the Commission still need to begin.
So there are now three differing proposals from the three main EU legislative institutions, which will need to come together in order to turn the proposal into law.
But whatever changes they make, the perception of ‘chat control’ will be difficult to shake off, especially given the amount of backlash it has triggered and the fact that it is damaging to privacy.
And it is still possible the law could be dropped in its entirety.
But what do you think? Is this legislation worth rescuing to tackle CSAM, or should this proposal be dropped?
Let me know what you think in the comments down below.
Additionally, do let me know if you like this type of content about specific EU policies, and if you would be interested in seeing more videos like this one.
As you’ll notice, this video doesn’t have a sponsor, since talking about child sexual abuse material is not really something sponsors want to be associated with. So if you liked this video, feel free to support Into Europe on Patreon. I recently started a crowdfunding campaign to raise €700 a month to rent a space for a video recording studio, and we are currently a little more than 25% of the way there! So go check it out via the link down below.