
EFF’s Reflections from RightsCon 2025 

EFF was delighted to once again attend RightsCon—this year hosted in Taipei, Taiwan between 24-27 February. As with previous years, RightsCon provided an invaluable opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. 

For some EFF attendees, this was their first RightsCon. For others, their 10th or 11th. But for all, one message rang loud and clear: the need to collectivize digital rights in the face of the growing number of authoritarian governments and leaders occupying positions of power around the globe, as well as Big Tech’s creation and provision of consumer technologies for use in rights-abusing ways.

EFF hosted a multitude of sessions, and appeared on many more panels—from a global perspective on platform accountability frameworks, to the perverse gears supporting transnational repression, to tech tools for queer liberation online. Here we share some of our highlights.

Major Concerns Around Funding Cuts to Civil Society 

Two major shifts affecting the digital rights space underlined the renewed need for solidarity and collective responses. First, the Trump administration’s summary (and largely illegal) funding cuts for the global digital rights movement from USAID, the State Department, the National Endowment for Democracy and other programs, are impacting many digital rights organizations across the globe and deeply harming the field. By some estimates, U.S. government cuts, along with major changes in the Netherlands and elsewhere, will result in a 30% reduction in the size of the global digital rights community, especially in global majority countries. 

Second, the Trump administration’s announcement to respond to the regulation of U.S. tech companies with tariffs has thrown another wrench into the work of many of us working towards improved tech accountability. 

We know that attacks on civil society, especially on funding, are a go-to strategy for authoritarian rulers, so this is deeply troubling. Even in more democratic settings, this reinforces the shrinking of civic space hindering our collective ability to organize and fight for better futures. Given the size of the cuts, it’s clear that other funders will struggle to counterbalance the dwindling U.S. public funding, but they must try. We urge other countries and regions, as well as individuals and a broader range of philanthropy, to step up to ensure that the crucial work defending human rights online will be able to continue. 

Community Solidarity with Alaa Abd El-Fattah and Laila Soueif

The call to free Alaa Abd El-Fattah from illegal detention in Egypt was a prominent message heard throughout RightsCon. During the opening ceremony, Access Now’s new Executive Director, Alejandro Mayoral, talked about Alaa’s keynote speech at the very first RightsCon and stated: “We stand in solidarity with him and all civil society actors, activists, and journalists whose governments are silencing them.” The opening ceremony also included a video address from Alaa’s mother, Laila Soueif, in which she urged viewers to “not let our defeat be permanent.” Sadly, immediately after that address Ms. Soueif was admitted to the hospital as a result of her longstanding hunger strike in support of her son. 

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at UK-based digital rights group Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

EFF and Open Rights Group also hosted a session about Alaa and his work as a blogger, coder, and activist over more than two decades. The session included a reading from Alaa’s book and a discussion with participants on strategies.

Platform Accountability in Crisis

Online platforms like Facebook and services like Google are crucial spaces for civic discourse and access to information. Many sessions at RightsCon were dedicated to the growing concern that these platforms have also become powerful tools for political manipulation, censorship, and control. With the return of the Trump administration, Facebook’s shift in hate speech policies, and the growing geo-politicization of digital governance, many now consider platform accountability to be in crisis.

A dedicated “Day 0” event, co-organized by Access Now and EFF, set the stage for these discussions with a high-level panel reflecting on alarming developments in platform content policies and enforcement. Drawing on Access Now’s “rule of law checklist,” speakers stressed how a small group of powerful individuals increasingly dictates how platforms operate, raising concerns about democratic resilience and accountability. They also highlighted the need for deeper collaboration with global majority countries on digital governance, taking into account diverse regional challenges. Beyond regulation, the conversation turned to the potential of user-empowered alternatives, such as decentralized services, to counter platform dominance and offer more sustainable governance models.

A key point of attention was the EU’s Digital Services Act (DSA), a rulebook with the potential to shape global responses to platform accountability but one that also leaves many crucial questions open. The conversation naturally transitioned to the workshop organized by the DSA Human Rights Alliance, which focused more specifically on the global implications of DSA enforcement and how principles for a “Human Rights-Centered Application of the DSA” could foster public interest and collaboration.

Fighting Internet Shutdowns and Anti-Censorship Tools

Many sessions discussed how internet shutdowns and other forms of internet blocking impact the daily lives of people under extremely oppressive regimes. The overwhelming conclusion was that we need encryption to remain strong in countries with better conditions of democracy in order to continue to bridge access to services in places where democracy is weak. Breaking encryption or blocking important tools for “national security,” elections, exams, protests, or for law enforcement only endangers freedom of information for those with less political power. In turn, these actions empower governments to take possibly inhumane actions while the “lights are out” and people can’t tell the rest of the world what is happening to them.

Another pertinent point coming out of RightsCon was that anti-censorship tools work best when everyone is using them. A diverse user base not only helps to create bridges for others who can’t access the internet through normal means, but also generates traffic that looks innocuous enough to bypass censorship blockers. Discussions highlighted that the more tools we have to connect people without producing distinctive traffic, the fewer chances government censorship technology has to keep that traffic from getting through. We know some governments are not above completely shutting down internet access. But in cases where they still allow the internet, user diversity is key. It also helps to move away from narratives implying that “only criminals” use encryption. Encryption is for everyone, and everyone should use it, because tomorrow’s internet will be tested by new threats.

Palestine: Human Rights in Times of Conflict

At this year’s RightsCon, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right. The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy.

On another panel, EFF raised concerns with Microsoft representatives about an AP report, published just prior to RightsCon, that the company is providing services to the Israeli Defense Forces that are being used as part of the repression of Palestinians in Gaza as well as in the bombings in Lebanon. We noted that Microsoft’s pledges to support human rights seem to be in conflict with this, a concern EFF has already raised about Google and Amazon and their work on Project Nimbus. Microsoft promised to look into that allegation, as well as one about its provision of services to Saudi Arabia.

In the RightsCon opening ceremony, Alejandro Mayoral noted that: “Today, the world’s eyes are on Gaza, where genocide has taken place, AI is being weaponized, and people’s voices are silenced as the first phase of the fragile Palestinian-Israeli ceasefire is realized.” He followed up by saying, “We are surrounded by conflict. Palestine, Sudan, Myanmar, Ukraine, and beyond…where the internet and technology are being used and abused at the cost of human lives.” Following this keynote, Access Now’s MENA Policy and Advocacy Director, Marwa Fatafta, hosted a roundtable to discuss technology in times of conflict, where takeaways included the reminder that “there is no greater microcosm of the world’s digital rights violations happening in our world today than in Gaza. It’s a laboratory where the most invasive and deadly technologies are being tested and deployed on a besieged population.”

Countering Cross-Border Arbitrary Surveillance and Transnational Repression

Concerns about legal instruments that can be misused to expand transnational repression were also front and center at RightsCon. During a Citizen Lab-hosted session we participated in, participants examined how cross-border policing can become a tool to criminalize marginalized groups, the economic incentives driving these criminalization trends, and the urgent need for robust, concrete, and enforceable international human rights safeguards. They also noted that the newly approved UN Cybercrime Convention, with only minimal protections, adds yet another mechanism for broadening cross-border surveillance powers, thereby compounding the proliferation of legal frameworks that lack adequate guardrails against misuse.

Age-Gating the Internet

EFF co-hosted a roundtable session to workshop a human rights statement addressing government mandates to restrict young people’s access to online services and specific legal online speech. Participants in the roundtable represented five continents and included representatives from civil society and academia, some of whom focused on digital rights and some on children’s rights. Many of the participants will continue to refine the statement in the coming months.

Hard Conversations

EFF participated in a cybersecurity conversation with representatives of the UK government, where we raised serious concerns about the government’s hostility to strong encryption, and about the insecurity it has created for both UK citizens and the people who communicate with them by pressuring Apple to guarantee UK law enforcement access to all communications.

Equity and Inclusion in Platform Discussions, Policies, and Trust & Safety

The platform economy is an evergreen RightsCon topic, and this year was no different, with conversations ranging from the impact of content moderation on free expression to transparency in monetization policies, and much in between. Given the recent developments at Meta, X, and elsewhere, many participants were rightfully eager to engage.

EFF co-organized an informal meetup of global content moderation experts with whom we regularly convene, and participated in a number of sessions, such as on the decline of user agency on platforms in the face of growing centralized services, as well as ways to expand choice through decentralized services and platforms. One notable session on this topic was hosted by the Center for Democracy and Technology on addressing global inequities in content moderation, in which speakers presented findings from their research on the moderation by various platforms of content in Maghrebi Arabic and Kiswahili, as well as a forthcoming paper on Quechua.

Reflections and Next Steps

RightsCon is a conference that reminds us of the size and scope of the digital rights movement around the world. Holding it in Taiwan, and in the wake of huge funding cuts affecting so many, created an urgency that was palpable across the spectrum of sessions and events. We know that we’ve built a robust community that can weather these storms. In the face of overwhelming pressure from government and corporate actors, it’s essential that we resist the temptation to isolate ourselves and instead continue to push forward with collectivization and collaboration, speaking truth to power from the U.S. to Germany and across the globe.

Systemic Risk Reporting: A System in Crisis?

The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms are in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon, and X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights, or the undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.

In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.  

A Short DSA Explainer  

The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.   

For example, hosting services like cloud computing must provide points of contact for government authorities and users and basic transparency reporting. Online platforms, meaning any service that makes user generated content available to the public, must meet additional requirements like providing users with detailed information about content moderation decisions and the right to appeal. They must also comply with additional transparency obligations.  

While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that it takes:

  • We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.  
  • We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and  
  • We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic. 

Introducing Systemic Risks 

The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services as such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, protection of minors and public health, and on a person’s physical and mental wellbeing.

The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation, as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features or functioning of a service, or “reinforcing internal processes.” Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.

Risk Management Through the Lens of the Romanian Election 

Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the DSA.  

To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified brief published by Romania’s intelligence services that alleges cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.” 

TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms enjoy in defining systemic risks and implementing their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes comparing the approaches taken by different companies impossible, it can also lead to overly broad or narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression may be restricted.

In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly describes some election-specific interventions, like labels for content that has not been fact checked but might contain misinformation, and describes TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It gives no indication that the robustness and utility of the measures employed are documented or have been tested, nor any benchmark for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how these labels influence users’ interactions with the content in question.

Similarly, the report does not contain any data regarding the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not help in answering any of these questions, either. And looking at the report compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, pending investigations by the European Commission due to potential non-compliance with the systemic risk mitigation obligations. 

Limitations of the DSA’s Risk Governance Approach 

What then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.  

As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.  

Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process. 

Speaking of benchmarks: there is a critical lack of standardized processes, assessment methodologies and reporting templates. Most assessment reports contain very little information on how the actual assessments are carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the auditing process itself. This information is crucial: without knowing whether auditors were provided the necessary information, whether they ran into roadblocks when examining specific issues, and how evidence was produced and documented, it is near impossible to adequately scrutinize the reports themselves. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.

The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system is forced to only consider the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. Expecting a definitive answer on the potential implications of large online services on complex societal processes from a risk report will therefore always fall short.  

The Way Forward  

The reports do present a slight improvement in terms of companies’ accountability and transparency. Even if the reports may not include the hard evidence of non-compliance some might have expected, they are a starting point to understanding how platforms attempt to grapple with complex issues taking place on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks described by the DSA as systemic and their relationships with online services are still poorly understood.  

Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed, one that builds on expertise from civil society, academics and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.

Saving the Internet in Europe: Defending Free Expression

This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of Internet access and digital services promised a new era of freedom of expression, where everyone could share and access information, speak out and find an audience without relying on gatekeepers and make, tinker with and share creative works.  

Everyone should have the right to express themselves and share ideas freely. Various European countries experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.

That’s why EFF has been fighting government threats to free expression while working to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes, such as notifications and explanations to users when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders.

What’s the Digital Services Act and why do we keep talking about it? 

For the past few years, we have been especially busy addressing human rights concerns in the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta’s platforms and app stores. However, not all of its rules apply to all services – instead, the DSA follows a risk-based approach that puts the most obligations on the largest services with the highest impact on users.

All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and must comply with additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads.

The most stringent obligations apply to the largest online platforms and search engines, those with more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors and public health. This novel approach might have merit but is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, such as age verification, have negative consequences of their own, like undermining users’ privacy and access to information.

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of the DSA’s systemic risk governance approach as a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could pressure platforms to over-remove content, especially as the DSA does not bar government authorities from becoming trusted flaggers.

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine the freedom of expression. Even though we have been successful in avoiding some of the most harmful ideas, concerns remain, especially with regards to the politicization of the enforcement of the DSA and potential over-enforcement. That’s why we will keep a close eye on the enforcement of the DSA, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.  

European laws often implicate users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and to anonymous speech, which is sometimes necessary.  

As Ofcom begins to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will continue fighting any efforts that fall short of protecting speech and privacy online.  

Media freedom and plurality for everyone 

Another issue we have been championing is media freedom. As with the DSA, the EU recently overhauled its rules for media services through the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content published by media outlets. While intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users could no longer trust that online platforms would address content violating community standards. Moreover, there is no easy way to distinguish legitimate media providers from those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms’ ability to restrict or remove such content could undermine rather than foster public discourse.  

The final version of the EMFA introduced a number of important safeguards but is still a bad deal for users. We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media, and limit the use of spyware against journalists.  

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past years, we have again and again revealed how companies’ broad-stroked content moderation practices censor users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.  

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to share how restrictive speech policies often disproportionally affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.  

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins – a period when newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and successes in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world. 

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act are centered on the fundamental rights of users in the EU and beyond.
  • The EU must create conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies. 
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

Germany Rushes to Expand Biometric Surveillance

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government introduced a so-called “security package” consisting of a bouquet of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out due to their possibly disastrous effect on fundamental rights online. 

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban biometric remote identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would be granted not just to law enforcement authorities: the Federal Office for Migration and Refugees would be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.  

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices by two German states are unconstitutional.  Regardless, the draft bill seeks to introduce similar powers across the country.  

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Due to its unclear definition, the provision risks undermining freedom of expression online, as providers might be pressured to share more rather than less data to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA to include specific criminal offences for which companies must share user data. While it is unrealistic to update European regulations as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more decidedly than ever.  

 
