Keeping People Safe Online – Fundamental Rights Protective Alternatives to Age Checks

This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.

Analyzing, defining and mitigating risks is a helpful approach in this regard: it allows us to look at possible risks holistically, weighing how likely a risk is to materialize, how severe it would be, and how differently it may affect different groups of people.

In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:

  • Content risks: This refers to the negative implications from the exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Conduct risks involve behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: This includes potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material. 

Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.

Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.

Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Differently put: Whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online. 

Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that although arguments in favour of age checks claim that the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online. 

But it’s clear that banning all access to certain information for an entire age cohort interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Assuming that all age check mandates can and will be circumvented, they seem to do little in the way of protecting children but rather undermine their fundamental rights to privacy, freedom of expression and access to information crucial for their development. 

At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms. 

Taking a Holistic Approach to Risk Mitigation 

In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach. 

Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks faced by young people using online services. Platforms may also come up with their own policies on how to moderate legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at platforms’ disposal, but it is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.

To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society. 

Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.

The DSA also foresees obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) that have more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms and their functionalities.

However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well advised to use risk assessments responsibly in their regular product and policy reviews when mitigating risks stemming from content, design choices or features, like recommender systems, ways of engaging with content and other users, or online ads. Especially when it comes to possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA.

The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates. 

Strengthening Users’ Choice 

Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them. 

Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: Allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to alter their children’s online experiences according to their needs.

The DSA takes a first helpful step in this direction by mandating that online platforms give users transparency about the main parameters used to recommend content to users, and to allow users to easily choose between different recommendation systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, thereby giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.

Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain. 

A Privacy First Approach to Addressing Online Harms 

While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms.

We follow this approach for two reasons: On the one hand, privacy risks are complex and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data.

Online services collect enormous amounts of personal data and use it to personalize and target their services, from the ads they display to the content their recommender systems surface. While the systems that target ads and those that curate online content are distinct, both are based on the surveillance and profiling of users. In addition to letting users choose between recommender systems, recommender systems based on behavioral data should be turned off by default for all users. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections.

Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users to draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including for children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are evils that come with using the web, thereby normalizing being tracked, profiled, and surveilled. 

This is why we have long advocated for a ban of online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising will be the most effective path to disincentivize the collection and processing of personal data and end the surveillance of all users, including children, online. 

Similarly, pay-for-privacy schemes should be banned, and we welcome the recent decision by the European Commission to fine Meta for breaching the Digital Markets Act by offering its users a binary choice between paying for privacy or having their personal data used for ads targeting. Especially in the face of recent political pressure from the Trump administration to not enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. And especially vulnerable users like children should not be confronted with the choice between paying extra (something that many children will not be able to do) or being surveilled.

Age Verification in the European Union: The Commission's Age Verification App

This is the second part of a three-part series about age verification in the European Union. In this blog post, we take a deep dive into the age verification app solicited by the European Commission, based on digital identities. Part one gives an overview of the political debate around age verification in the EU and part three explores measures to keep all users safe that do not require age checks. 

In part one of this series on age verification in the European Union, we gave an overview of the state of the debate in the EU and introduced an age verification app, or mini-wallet, that the European Commission has commissioned. In this post, we will take a more detailed look at the app, how it will work and what some of its shortcomings are.

According to the original tender and the app’s recently published specifications, the Commission is soliciting the creation of a mobile application that will act as a digital wallet by storing a proof of age to enable users to verify their ages and access age-restricted content.

After downloading the app, a user would request proof of their age. For this crucial step, the Commission foresees users relying on a variety of age verification methods, including national eID schemes, physical ID cards, linking the app to another app that contains information about a user’s age, like a banking app, or age assessment through third parties like banks or notaries. 

In the next step, the age verification app would generate a proof of age. When the user accesses a website that restricts content for certain age cohorts, the platform would request proof of the user’s age through the app. The app would then present that proof, allowing the online service to verify the age attestation and the user to access the age-restricted website or content in question. The goal is to build an app that aligns and integrates with the architecture of the upcoming EU Digital Identity Wallet.

The user journey of the European Commission's age verification app
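
To make the division of roles more concrete, here is a minimal sketch of that flow in Python. The issuer, wallet and verifier names are hypothetical, and a shared-secret HMAC stands in for what would in practice be a public-key signature; none of the interfaces come from the Commission’s specifications.

    import hashlib
    import hmac
    import json

    # Illustrative only: a real scheme would use a public-key signature so that
    # verifiers can check attestations without being able to forge them.
    ISSUER_KEY = b"demo-issuer-signing-key"

    def issue_proof_of_age(over_18: bool) -> dict:
        """Issuer (eID scheme, bank, notary, ...) attests a yes/no age claim."""
        claim = json.dumps({"over_18": over_18})
        tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return {"claim": claim, "tag": tag}

    class MiniWallet:
        """The app stores the attestation and presents it when a site asks."""
        def __init__(self, proof: dict):
            self._proof = proof

        def present(self) -> dict:
            # Only the yes/no claim and the issuer's tag leave the wallet,
            # not a birthdate or an identity document.
            return self._proof

    def verify(presentation: dict) -> bool:
        """The age-restricted service checks the issuer's tag on the claim."""
        expected = hmac.new(ISSUER_KEY, presentation["claim"].encode(),
                            hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, presentation["tag"])
                and json.loads(presentation["claim"])["over_18"])

    wallet = MiniWallet(issue_proof_of_age(over_18=True))
    print(verify(wallet.present()))  # True: access granted without sharing an ID

Even in this toy version, the privacy questions are visible: the verifier learns only a yes/no answer, but everything depends on how the attestation was issued in the first place and on whether repeated presentations can be linked back to the same person.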

Review of the Commission’s Specifications for an Age Verification Mini-ID Wallet 

According to the specifications, interoperability, privacy and security are key concerns for the Commission in designing the app’s main requirements. The Commission acknowledges that development of the app is far from finished and will be an iterative process, and that key areas require feedback from stakeholders across industry and civil society.

The specifications consider important principles to ensure the security and privacy of users verifying their age through the app, including data minimization, unlinkability (to ensure that only the identifiers required for specific linkable transactions are disclosed), storage limitations, transparency and measures to secure user data and prevent the unauthorized interception of personal data. 

However, a closer look at the specifications shows that many of the mechanisms envisioned to protect users’ privacy are optional rather than required. For example, the app should implement salted hashes and Zero Knowledge Proofs (ZKPs), but is not obliged to do so. Indeed, the app’s specifications seem to rely heavily on ZKPs while simultaneously acknowledging that no compatible ZKP solution is currently available. This warrants a closer inspection of what ZKPs are and why they may not be the final answer to protecting users’ privacy in the context of age verification.
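
As a rough illustration of why the optional salting matters, consider the sketch below. It hashes a hypothetical birth-year attribute with and without a salt; the attribute name and values are made up for the example.

    import hashlib
    import secrets

    birth_year = "2007"

    # An unsalted hash of a low-entropy attribute is trivial to reverse:
    # just hash every plausible year and compare.
    unsalted = hashlib.sha256(birth_year.encode()).hexdigest()
    recovered = next(str(y) for y in range(1900, 2026)
                     if hashlib.sha256(str(y).encode()).hexdigest() == unsalted)
    assert recovered == birth_year

    # With a fresh random salt, the same dictionary attack fails without the
    # salt, and two presentations of the same attribute no longer share a
    # hash value that could be used to link them.
    salt = secrets.token_bytes(16)
    salted = hashlib.sha256(salt + birth_year.encode()).hexdigest()
    print(recovered, salted[:16])

Salting only protects the stored or transmitted value, though; it does not, by itself, let a user prove they are above or below a threshold without revealing the value, which is where ZKPs come in.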

A Closer Look at Zero Knowledge Proofs

Zero Knowledge Proofs provide a cryptographic way to prove something about a piece of information, like your exact date of birth, without giving the information itself away. They can offer a “yes-or-no” claim (like above or below 18) to a verifier that needs to check a legal age threshold. Two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for a prover to present forged information. Zero knowledge benefits the holder, because they don’t have to share the underlying information, just a proof that a claim about it is true. This is objectively more secure than uploading a picture of your ID to multiple sites or applications, but it still requires an initial ID upload, as described above, for activation.
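
For readers unfamiliar with the idea, the following toy Schnorr-style interactive proof shows both properties in miniature: the prover convinces the verifier that it knows a secret x with y = g^x mod p without revealing x. This is a textbook sketch with deliberately tiny parameters, not the attribute-based credential ZKPs an age wallet would actually use.

    import secrets

    # Toy group parameters for illustration only: p is a safe prime (p = 2q + 1)
    # and g generates the subgroup of prime order q. Real systems use much
    # larger standardized groups or elliptic curves.
    p, q, g = 467, 233, 4

    x = secrets.randbelow(q - 1) + 1   # prover's secret (e.g., a credential key)
    y = pow(g, x, p)                   # public value known to the verifier

    # 1. Commit: the prover picks a fresh random nonce and sends t = g^r mod p.
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)

    # 2. Challenge: the verifier replies with a random challenge c.
    c = secrets.randbelow(q)

    # 3. Respond: the prover sends s = r + c*x mod q. Because r is random,
    #    s leaks nothing about x on its own ("zero knowledge").
    s = (r + c * x) % q

    # 4. Verify: the equation only balances if the prover really knows x
    #    ("soundness").
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("proof accepted without revealing x")

Real deployments typically use non-interactive variants of such proofs, and, as noted above, how well these constructions hold up at the scale of millions of daily age checks remains an open question.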

The Commission’s scheme makes several questionable assumptions. First, that frequently used ZKPs will avoid privacy concerns, and second, that verifiers won’t combine this data with existing information, such as account data, profiles, or interests, for other purposes, such as advertising. The European Commission plans to test these assumptions with extremely sensitive data: government-issued IDs. Though ZKPs are a better approach, this is a brand new system affecting millions of people, who will be asked to provide proof of their age potentially more frequently than ever before. This rolls the dice on the resiliency of these privacy measures over time. Furthermore, not all ZKP systems are the same, and while there is research on their use on mobile devices, rushing to implementation before that research matures puts users at risk.

Who Can Ask for Proof of Your Age?

Regulation of verifiers (the service providers asking for age attestations) and of what they can ask for is just as important, to limit a potential flood of verifiers that previously had no need for age verification. This is especially true for non-Know-Your-Customer (KYC) cases, in which service providers are not required to perform due diligence on their users. Equally important are rules that determine the consequences when verifiers violate those regulations. Until recently, the eIDAS framework, whose technical implementation is still being negotiated, required registration certificates for verifiers across all EU member states. By forcing verifiers to register the data categories they intend to ask for, issues like illegal data requests were supposed to be mitigated. But this requirement has since been rolled back, and the Commission’s planned mini-ID wallet will not require it at the outset. Users will be asked to prove how old they are without the restraint on verifiers that protects against abusive requests. Without verifier accountability, or at least a determined scope for industry-level data categories, users are being asked to enter into an imbalanced relationship. An earlier mock-up gave some hope for empowered selective disclosure, where a user could toggle giving discrete pieces of information on and off at the time of the verifier’s request. It would be more proactive to provide that setting to the holder in their wallet settings, before a request is made by a relying party.

Privacy tech is offered in this system as a concession to users forced to share information even more frequently, rather than as a way to bring more equity into existing interactions with those who hold power by mediating access to information, loans, jobs, and public benefits. Words mean things, and ZKPs are not the solution, but a part of one. Most ZKP systems are more focused on making proof and verification times more efficient than on privacy itself. The latest research on digital credentials has produced more privacy-oriented ways to share information. But at this scale, we will need regulation and additional safeguards against aggressive verification to fulfill the promise of better privacy for eID use.

Who Will Have Access to the Mini-ID Wallet, and Who Will Be Left Out?

Beyond its technical specifications, the proposed app raises a number of accessibility and participation issues. At its heart, the mini-ID wallet relies on verifying a user’s age through a proof of age. According to the tender, the wallet should support four methods for issuing and proving a user’s age.

Different age verification methods foreseen by the app

The first option is national eID schemes, an obvious choice: Many Member States are currently working on (or have already notified) national eID schemes in the context of eIDAS, Europe’s eID framework. The goal is to allow the mini-ID wallet to integrate with the eIDAS node operated by the European Commission to verify a user’s age. But although many Member States are working on national eID schemes, uptake of eIDs has so far been slow, and it’s questionable whether an EU-wide rollout will be successful.

But even if an EU-wide rollout were achievable, many will not be able to participate. Those who do not possess ID cards, passports, residence permits, or documents like birth certificates will not be able to obtain an eID and risk losing access to knowledge, information, and services. This is especially relevant for already marginalized groups like refugees or unhoused people, who may lose access to critical resources. Many children and teenagers will also be unable to participate in eID schemes. There are no EU-wide rules on when children need to have government-issued IDs, and while some countries, like Germany, mandate that every citizen above the age of 16 possess an ID, others, like Sweden, don’t require their citizens to have an ID or passport. In most EU Member States, the minimum age at which children can apply for an ID without parental consent is 18. So even where children and teenagers have a legal option to get an ID, their parents might withhold consent, making it impossible for a child to verify their age in order to access information or services online.

The second option is so-called smartcards, or physical eID cards, such as national ID cards, e-passports or other trustworthy physical eID cards. The same limitations as for eIDs apply. Additionally, the Commission’s tender suggests the mini-ID wallet will rely on biometric recognition software to compare a user to the physical ID card they are using to verify their age. This leads to a host of questions regarding the processing and storing of sensitive biometric data. A recent study by the National Institute of Standards and Technology compared different age estimation algorithms based on biometric data and found that certain ethnicities are still underrepresented in training data sets, exacerbating the risk that age estimation systems discriminate against people of color. The study also reports higher error rates for female faces compared to male faces, and that overall accuracy is strongly influenced by factors people have no control over, including “sex, image quality, region-of-birth, age itself, and interactions between those factors.” Other studies on the accuracy of biometric recognition software have reported higher error rates for people with disabilities as well as trans and non-binary people.

The third option foresees a procedure to allow for the verification of a user’s identity through institutions like a bank, a notary, or a citizen service center. It is encouraging that the Commission’s tender foresees an option for different, non-state institutions to verify a user’s age. But neither banks nor notary offices are especially accessible for people who are undocumented, unhoused, don’t speak a Member State’s official language, or are otherwise marginalized or discriminated against. Banks and notaries also often require a physical ID in order to verify a client’s identity, so the fundamental access issues outlined above persist.

Finally, the specification suggests that third-party apps that have already verified a user’s identity, like banking apps or mobile network operators, could provide age verification signals. In many European countries, however, showing an ID is a necessary prerequisite for opening a bank account, setting up a phone contract, or even buying a SIM card.

In summary, none of the options the Commission is considering for proving someone’s age accounts for the obstacles faced by different marginalized groups, leaving potentially millions of people across the EU unable to access crucial services and information, thereby undermining their fundamental rights.

The question of which institutions will be able to verify ages is only one dimension of the ramifications of approaches like the mini-ID wallet for accessibility and participation. Although often forgotten in policy discussions, not everyone has access to a personal device. Age verification methods like the mini-ID wallet, which are device-dependent, can be a real obstacle for people who share devices, or for users who access the internet through libraries, schools, or internet cafés, which do not accommodate the use of personal age verification apps. The average number of devices per household has been found to correlate strongly with income and education levels, further underscoring that it is often those already on the margins of society who risk being left behind by age verification mandates based on digital identities.

This is why we need to push back against age verification mandates. Not because child safety is not a concern – it is. But because age verification mandates risk undermining crucial access to digital services, eroding privacy and data protection, and limiting the freedom of expression. Instead, we must ensure that the internet remains a space where all voices can be heard, free from discrimination, and where we do not have to share sensitive personal data to access information and connect with each other.

Digital Identities and the Future of Age Verification in Europe

This is the first part of a three-part series about age verification in the European Union. In this blog post, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

As governments across the world pass laws to “keep children safe online,” more often than not, notions of safety rest on the ability of platforms, websites, and online entities to discern users by age. This legislative trend has also arrived in the European Union, where online child safety is becoming one of the issues that will define European tech policy for years to come.

Like many policymakers elsewhere, European regulators are increasingly focused on a range of online harms they believe are associated with online platforms, such as compulsive design and the effects of social media consumption on children’s and teenagers’ mental health. Many of these concerns lack robust scientific evidence; studies have drawn a far more complex and nuanced picture about how social media and young people’s mental health interact. Still, calls for mandatory age verification have become as ubiquitous as they have become trendy. Heads of state in France and Denmark have recently called for banning under-15-year-olds from social media Europe-wide, while Germany, Greece and Spain are working on their own age verification pilots.

EFF has been fighting age verification mandates because they undermine the free expression rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security. We do not think that requiring service providers to verify users’ age is the right approach to protecting people online. 

Policymakers frame age verification as a necessary tool to prevent children from accessing content deemed unsuitable, to be able to design online services appropriate for children and teenagers, and to enable minors to participate online in age-appropriate ways. Rarely is it acknowledged that age verification undermines the privacy and free expression rights of all users, routinely blocks access to resources that can be life-saving, and undermines the development of media literacy. Rare, too, are critical conversations about the specific rights of young users: The UN Convention on the Rights of the Child clearly expresses that minors have rights to freedom of expression and access to information online, as well as the right to privacy. These rights are reflected in the European Charter of Fundamental Rights, which establishes the rights to privacy, data protection and free expression for all European citizens, including children. These rights would be steamrolled by age verification requirements. And rarer still are policy discussions of ways to improve these rights for young people.

Implicitly Mandatory Age Verification

Currently, there is no legal obligation to verify users’ age in the EU. However, different European legal acts that recently entered into force or are being discussed implicitly require providers to know users’ ages or suggest age assessments as a measure to mitigate risks for minors online. At EFF, we consider these proposals akin to mandates because there is often no alternative method to comply except to introduce age verification. 

Under the General Data Protection Regulation (GDPR), in practice, providers will often need to implement some form of age verification or age assurance (depending on the type of service and risks involved): Article 8 stipulates that the processing of personal data of children under the age of 16 requires parental consent. Thus, service providers are implicitly required to make reasonable efforts to assess users’ ages – although the law doesn’t specify what “reasonable efforts” entails. 

Another example is the child safety article (Article 28) of the Digital Services Act (DSA), the EU’s recently adopted new legal framework for online platforms. It requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. The article also prohibits targeting minors with personalized ads. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy, and taking measures to protect minors specifically, but it's presently unclear which measures providers must take to comply with these obligations. Recital 71 of the DSA states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. The European Commission is currently working on guidelines for the implementation of Article 28 and may come up with criteria for what they believe would be effective and privacy-preserving age verification. 

The DSA does explicitly name age verification as one measure the largest platforms – so-called Very Large Online Platforms (VLOPs) that have more than 45 million monthly users in the EU – can choose to mitigate systemic risks related to their services. Those risks, while poorly defined, include negative impacts on the protection of minors and on users’ physical and mental wellbeing. While this, too, is not an explicit obligation, the European Commission seems to expect adult content platforms to adopt age verification to comply with their risk mitigation obligations under the DSA. 

Adding another layer of complexity, age verification is a major element of the dangerous European Commission proposal to fight child sexual abuse material through mandatory scanning of private and encrypted communication. While the negotiations of this bill have largely stalled, the Commission’s original proposal puts an obligation on app stores and interpersonal communication services (think messaging apps or email) to implement age verification. While the European Parliament has followed the advice of civil society organizations and experts and has rejected the notion of mandatory age verification in its position on the proposal, the Council, the institution representing member states, is still considering mandatory age verification. 

Digital Identities and Age Verification 

Leaving aside the various policy work streams that implicitly or explicitly consider whether age verification should be introduced across the EU, the European Commission seems to have decided on the how: Digital identities.

In 2024, the EU adopted the updated version of the so-called eIDAS Regulation, which sets out a legal framework for digital identities and authentication in Europe. Member States are now working on national identity wallets, with the goal of rolling out digital identities across the EU by 2026.

Despite the imminent rollout of digital identities in 2026, which could facilitate age verification, the European Commission clearly felt pressure to act sooner. That’s why, in the fall of 2024, the Commission published a tender for a “mini-ID wallet”, offering four million euros for the development of an “age verification solution” by the second quarter of 2025, to appease Member States anxious to introduce age verification today. 

Favoring digital identities for age verification follows an overarching trend of pushing age assessment obligations ever further down the stack – from apps to app stores to operating system providers. Dealing with age verification at the app store, device, or operating system level is also a demand long made by providers of social media and dating apps seeking to avoid liability for insufficient age verification. Embedding age verification at the device level will make it more ubiquitous and harder to avoid. This is a dangerous direction; digital identity systems raise serious concerns about privacy and equity.

This approach will likely also lead to mission creep: While the Commission limits its tender to age verification for 18+ services (specifically adult content websites), it is made abundantly clear that once available, age verification could be extended to “allow age-appropriate access whatever the age-restriction (13 or over, 16 or over, 65 or over, under 18 etc)”. Extending age verification is even more likely when digital identity wallets don’t come in the shape of an app, but are baked into operating systems. 

In the next post of this series, we will be taking a closer look at the age verification app the European Commission has been working on.

EFF’s Reflections from RightsCon 2025 

EFF was delighted to once again attend RightsCon—this year hosted in Taipei, Taiwan between 24-27 February. As with previous years, RightsCon provided an invaluable opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. 

For some attending from EFF, this was the first RightsCon. For others, their 10th or 11th. But for all, one message was spoken loud and clear: the need to collectivize digital rights in the face of growing authoritarian governments and leaders occupying positions of power around the globe, as well as Big Tech’s creation and provision of consumer technologies for use in rights-abusing ways. 

EFF hosted a multitude of sessions and appeared on many more panels—from a global perspective on platform accountability frameworks, to the perverse gears supporting transnational repression, to tech tools for queer liberation online. Here we share some of our highlights.

Major Concerns Around Funding Cuts to Civil Society 

Two major shifts affecting the digital rights space underlined the renewed need for solidarity and collective responses. First, the Trump administration’s summary (and largely illegal) funding cuts for the global digital rights movement from USAID, the State Department, the National Endowment for Democracy and other programs, are impacting many digital rights organizations across the globe and deeply harming the field. By some estimates, U.S. government cuts, along with major changes in the Netherlands and elsewhere, will result in a 30% reduction in the size of the global digital rights community, especially in global majority countries. 

Second, the Trump administration’s announcement that it would respond to the regulation of U.S. tech companies with tariffs has thrown another wrench into the work of many of us working toward improved tech accountability. 

We know that attacks on civil society, especially on funding, are a go-to strategy for authoritarian rulers, so this is deeply troubling. Even in more democratic settings, this reinforces the shrinking of civic space hindering our collective ability to organize and fight for better futures. Given the size of the cuts, it’s clear that other funders will struggle to counterbalance the dwindling U.S. public funding, but they must try. We urge other countries and regions, as well as individuals and a broader range of philanthropy, to step up to ensure that the crucial work defending human rights online will be able to continue. 

Community Solidarity with Alaa Abd El-Fattah and Laila Soueif

The call to free Alaa Abd El-Fattah from illegal detention in Egypt was a prominent message heard throughout RightsCon. During the opening ceremony, Access Now’s new Executive Director, Alejandro Mayoral, talked about Alaa’s keynote speech at the very first RightsCon and stated: “We stand in solidarity with him and all civil society actors, activists, and journalists whose governments are silencing them.” The opening ceremony also included a video address from Alaa’s mother, Laila Soueif, in which she urged viewers to “not let our defeat be permanent.” Sadly, immediately after that address Ms. Soueif was admitted to the hospital as a result of her longstanding hunger strike in support of her son. 

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at UK-based digital rights group Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

EFF and Open Rights Group also hosted a session about Alaa and his work as a blogger, coder, and activist over more than two decades. The session included a reading from Alaa’s book and a discussion with participants on strategies.

Platform Accountability in Crisis

Online platforms like Facebook and services like Google are crucial spaces for civic discourse and access to information. Many sessions at RightsCon were dedicated to the growing concern that these platforms have also become powerful tools for political manipulation, censorship, and control. With the return of the Trump administration, Facebook’s shift in hate speech policies, and the growing geo-politicization of digital governance, many now consider platform accountability to be in crisis. 

A dedicated “Day 0” event, co-organized by Access Now and EFF, set the stage for these discussions with a high-level panel reflecting on alarming developments in platform content policies and enforcement. Drawing on Access Now’s “rule of law checklist,” speakers stressed how a small group of powerful individuals increasingly dictates how platforms operate, raising concerns about democratic resilience and accountability. They also highlighted the need for deeper collaboration with global majority countries on digital governance, taking into account diverse regional challenges. Beyond regulation, the conversation turned to the potential of user-empowered alternatives, such as decentralized services, to counter platform dominance and offer more sustainable governance models.

A key point of attention was the EU’s Digital Services Act (DSA), a rulebook with the potential to shape global responses to platform accountability but one that also leaves many crucial questions open. The conversation naturally transitioned to the workshop organized by the DSA Human Rights Alliance, which focused more specifically on the global implications of DSA enforcement and how principles for a “Human Rights-Centered Application of the DSA” could foster public interest and collaboration.

Fighting Internet Shutdowns and Anti-Censorship Tools

Many sessions discussed how internet shutdowns and other forms of internet blocking impact the daily lives of people under extremely oppressive regimes. The overwhelming conclusion was that encryption needs to remain strong in countries with stronger democratic conditions in order to keep bridging access to services in places where democracy is weak. Breaking encryption or blocking important tools in the name of “national security,” elections, exams, protests, or law enforcement only endangers freedom of information for those with less political power. In turn, these actions empower governments to take possibly inhumane actions while the “lights are out” and people can’t tell the rest of the world what is happening to them.

Another pertinent point coming out of RightsCon was that anti-censorship tools work best when everyone is using them. Diversity of users not only helps to create bridges for others who can’t access the internet through normal means, but also helps to create traffic that looks innocuous enough to bypass censorship blockers. Discussions highlighted how the more tools we have to connect people without distinctive traffic, the fewer chances there are for government censorship technology to keep that traffic from getting through. We know some governments are not above completely shutting down internet access. But in cases where they still allow the internet, user diversity is key. It also helps to move away from narratives that imply “only criminals” use encryption. Encryption is for everyone, and everyone should use it, because tomorrow’s internet could be tested by future threats.

Palestine: Human Rights in Times of Conflict

At this year’s RightsCon, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right. The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy. 

On another panel, EFF raised concerns to Microsoft representatives about an AP report that emerged just prior to RightsCon about the company providing services to the Israeli Defense Forces that are being used as part of the repression of Palestinians in Gaza as well as in the bombings in Lebanon. We noted that Microsoft’s pledges to support human rights seemed to be in conflict with this, something EFF has already raised about Google and Amazon and their work on Project Nimbus. Microsoft promised to look into that allegation, as well as one about its provision of services to Saudi Arabia. 

In the RightsCon opening ceremony, Alejandro Mayoral noted that: “Today, the world’s eyes are on Gaza, where genocide has taken place, AI is being weaponized, and people’s voices are silenced as the first phase of the fragile Palestinian-Israeli ceasefire is realized.” He followed up by saying, “We are surrounded by conflict. Palestine, Sudan, Myanmar, Ukraine, and beyond…where the internet and technology are being used and abused at the cost of human lives.” Following this keynote, Access Now’s MENA Policy and Advocacy Director, Marwa Fatafta, hosted a roundtable to discuss technology in times of conflict, where takeaways included the reminder that “there is no greater microcosm of the world’s digital rights violations happening in our world today than in Gaza. It’s a laboratory where the most invasive and deadly technologies are being tested and deployed on a besieged population.”

Countering Cross-Border Arbitrary Surveillance and Transnational Repression

Concerns about ongoing legal instruments that can be misused to expand transnational repression were also front-and-center at RightsCon. During a Citizen Lab-hosted session we participated in, participants examined how cross-border policing can become a tool to criminalize marginalized groups, the economic incentives driving these criminalization trends, and the urgent need for robust, concrete, and enforceable international human rights safeguards. They also noted that the newly approved UN Cybercrime Convention, with only minimal protections, adds yet another mechanism for broadening cross-border surveillance powers, thereby compounding the proliferation of legal frameworks that lack adequate guardrails against misuse.

Age-Gating the Internet

EFF co-hosted a roundtable session to workshop a human rights statement addressing government mandates to restrict young people’s access to online services and specific legal online speech. Participants in the roundtable represented five continents and included representatives from civil society and academia, some of whom focused on digital rights and some on children’s rights. Many of the participants will continue to refine the statement in the coming months.

Hard Conversations

EFF participated in a cybersecurity conversation with representatives of the UK government, where we raised serious concerns about the government’s hostility to strong encryption, and about the insecurity created for both UK citizens and the people who communicate with them by its pressure on Apple to ensure UK law enforcement access to all communications. 

Equity and Inclusion in Platform Discussions, Policies, and Trust & Safety

The platform economy is an evergreen RightsCon topic, and this year was no different, with conversations ranging from the impact of content moderation on free expression to transparency in monetization policies, and much in between. Given the recent developments at Meta, X, and elsewhere, many participants were rightfully eager to engage.

EFF co-organized an informal meetup of global content moderation experts with whom we regularly convene, and participated in a number of sessions, such as on the decline of user agency on platforms in the face of growing centralized services, as well as ways to expand choice through decentralized services and platforms. One notable session on this topic was hosted by the Center for Democracy and Technology on addressing global inequities in content moderation, in which speakers presented findings from their research on the moderation by various platforms of content in Maghrebi Arabic and Kiswahili, as well as a forthcoming paper on Quechua.

Reflections and Next Steps

RightsCon is a conference that reminds us of the size and scope of the digital rights movement around the world. Holding it in Taiwan, and in the wake of huge funding cuts for so many, created an urgency that was palpable across the spectrum of sessions and events. We know that we’ve built a robust community that can weather these storms. In the face of overwhelming pressure from government and corporate actors, it’s essential that we resist the temptation to isolate and instead continue to push forward with collectivization and collaboration, speaking truth to power from the U.S. to Germany and across the globe.

Systemic Risk Reporting: A System in Crisis?

The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms is in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon or X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights or the undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.  

In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.  

A Short DSA Explainer  

The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.   

For example, hosting services like cloud computing providers must provide points of contact for government authorities and users, and must publish basic transparency reports. Online platforms, meaning any service that makes user-generated content available to the public, must meet additional requirements, like providing users with detailed information about content moderation decisions and the right to appeal them. They must also comply with additional transparency obligations.  

While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that it takes:  

  • We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.  
  • We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and  
  • We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic. 

Introducing Systemic Risks 

The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services as such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, the protection of minors and public health, and on a person’s physical and mental wellbeing.  

The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation, as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features or functioning of a service, or “reinforcing internal processes”. Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.   

Risk Management Through the Lens of the Romanian Election 

Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the DSA.  

To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified brief published by Romania’s intelligence services that alleges cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.” 

TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes it impossible to compare the approaches taken by different companies, it can also lead to overly broad or overly narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression may be restricted.  

In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly covers some election-specific interventions, like labels for content that has not been fact-checked but might contain misinformation, as well as TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It contains no indication that the robustness and utility of the measures employed are documented or have been tested, nor any benchmarks for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how these labels influence users’ interactions with the content in question.  

Similarly, the report does not contain any data regarding the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not help in answering any of these questions, either. And looking at the report compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, still-pending European Commission investigations into potential non-compliance with the systemic risk mitigation obligations. 

Limitations of the DSA’s Risk Governance Approach 

What, then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.  

As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.  

Rather than leaving everyone to make sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process. 

Speaking of benchmarks: There is a critical lack of standardized processes, assessment methodologies and reporting templates. Most assessment reports contain very little information on how the actual assessments were carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the auditing process itself. This information is crucial: it is nearly impossible to adequately scrutinize the reports without understanding whether auditors were provided the necessary information, whether they ran into roadblocks when looking at specific issues, and how evidence was produced and documented. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.  

The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system can only consider the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. Expecting a risk report to deliver a definitive answer on the implications of large online services for complex societal processes will therefore always lead to disappointment.  

The Way Forward  

The reports do represent a slight improvement in terms of companies’ accountability and transparency. Even if they may not include the hard evidence of non-compliance some might have expected, they are a starting point for understanding how platforms attempt to grapple with complex issues playing out on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks the DSA describes as systemic, and their relationship to online services, are still poorly understood.  

Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed, one that builds on the expertise of civil society, academics and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.  

Saving the Internet in Europe: Defending Free Expression

This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of internet access and digital services promised a new era of freedom of expression, in which everyone could share and access information, speak out and find an audience without relying on gatekeepers, and make, tinker with, and share creative works.  

Everyone should have the right to express themselves and share ideas freely. Various European countries have experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.  

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque and selective, resulting in the suppression of contentious ideas and minority voices.  

That’s why EFF fights government threats to free expression and works to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes, such as notifications and explanations to users when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship and spoken up on behalf of human rights defenders. 

What’s the Digital Services Act and why do we keep talking about it? 

For the past several years, we have been especially busy addressing human rights concerns in the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta and app stores. However, not all of its rules apply to all services – instead, the DSA follows a risk-based approach that puts the most obligations on the largest services with the highest impact on users. All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal them, and they face additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads. The most stringent obligations apply to the largest online platforms and search engines, those with more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors and public health. This novel approach might have merit but is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, for example age verification, have negative consequences themselves, like undermining users’ privacy and access to information.  

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of the DSA’s systemic risk governance approach into a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not prevent government authorities from becoming trusted flaggers.  

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine freedom of expression. Even though we have been successful in fending off some of the most harmful ideas, concerns remain, especially with regard to the politicization of the DSA’s enforcement and potential over-enforcement. That’s why we will keep a close eye on how the DSA is enforced, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.  

European laws often affect users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Although it seeks to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but that of internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is that, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful to kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and to the anonymous speech that is sometimes necessary.  

As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will continue fighting any measures that insufficiently protect speech and privacy online.  

Media freedom and plurality for everyone 

Another issue that we have been championing is media freedom. Similar to the DSA, the EU recently overhauled its rules for media services with the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content posted by media outlets. Although intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users would no longer be able to trust that online platforms will address content that violates community standards. Besides, there is no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms' ability to restrict or remove such content could undermine rather than foster public discourse.  

The final version of the EMFA introduced a number of important safeguards but is still a bad deal for users. We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media, and limit the use of spyware against journalists.  

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past years, we have revealed again and again how companies’ broad-stroke content moderation practices censor users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.  

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to share how restrictive speech policies often disproportionately affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.  

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed of tech regulation that often has ramifications for users globally. The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins – a period when newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world. 

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create the conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights-centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies. 
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design, and protect children online without resorting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, steer clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

Germany Rushes to Expand Biometric Surveillance

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance on an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government has introduced a so-called “security package” consisting of a bouquet of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out due to their possibly disastrous effect on fundamental rights online. 

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban remote biometric identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would not be granted to law enforcement authorities alone: the Federal Office for Migration and Asylum would also be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.  

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices by two German states are unconstitutional.  Regardless, the draft bill seeks to introduce similar powers across the country.  

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Due to its unclear definition, the provision risks undermining freedom of expression online, as providers might be pressured to share more rather than less data to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA with a list of specific criminal offences for which companies must share user data. While it is unrealistic to amend a European regulation as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more resolutely than ever.  

 
