The Kids Online Safety Act Will Make the Internet Worse for Everyone

By: Joe Mullin
May 15, 2025 at 14:00

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA Will Silence Kids and Adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. This isn’t merely hypothetical: it’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform can be sure whether any given piece of content crosses the line. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet.

When the safest legal option is to delete a forum, platforms will delete the forum.

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself privileges certain viewpoints over others. Moreover, liability under KOSA attaches to the platform, not the user. The only way for platforms to reduce risk under KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

TAKE ACTION

TELL CONGRESS: OPPOSE KOSA

Keeping People Safe Online – Fundamental Rights-Protective Alternatives to Age Checks

This is the final part of a three-part series about age verification in the European Union. In part one, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

When thinking about the safety of young people online, it is helpful to remember that we can build on and learn from the decades of experience we already have thinking through risks that can stem from content online. Before mandating a “fix,” like age checks or age assurance obligations, we should take the time to reflect on what it is exactly we are trying to address, and whether the proposed solution is able to solve the problem.

The approach of analyzing, defining, and mitigating risks is helpful in this regard: it allows us to take a holistic look at possible harms, including how likely a risk is to materialize, how severe it would be, and how it may affect different groups of people very differently.

In the context of child safety online, mandatory age checks are often presented as a solution to a number of risks potentially faced by minors online. The most common concerns to which policymakers refer in the context of age checks can be broken down into three categories of risks:

  • Content risks: This refers to the negative implications from the exposure to online content that might be age-inappropriate, such as violent or sexually explicit content, or content that incites dangerous behavior like self-harm. 
  • Conduct risks: Conduct risks involve behavior by children or teenagers that might be harmful to themselves or others, like cyberbullying, sharing intimate or personal information or problematic overuse of a service.
  • Contact risks: This includes potential harms stemming from contact with people that might pose a risk to minors, including grooming or being forced to exchange sexually explicit material. 

Taking a closer look at these risk categories, we can see that mandatory age checks are an ineffective and disproportionate tool to mitigate many risks at the top of policymakers’ minds.

Mitigating risks stemming from contact between minors and adults usually means ensuring that adults are barred from spaces designated for children. Age checks, especially age verification depending on ID documents like the European Commission’s mini-ID wallet, are not a helpful tool in this regard as children routinely do not have access to the kind of documentation allowing them to prove their age. Adults with bad intentions, on the other hand, are much more likely to be able to circumvent any measures put in place to keep them out.

Conduct risks have little to do with how old a specific user is, and much more to do with social dynamics and the affordances and constraints of online services. Differently put: Whether a platform knows a user’s age will not change how minor users themselves decide to behave and interact on the platform. Age verification won’t prevent users from choosing to engage in harmful or risky behavior, like freely posting personal information or spending too much time online. 

Finally, mitigating risks related to content deemed inappropriate is often thought of as shutting minors out from accessing certain information. Age check mandates seek to limit access to services and content without much granularity. They don’t allow for a nuanced weighing of the ways in which accessing the internet and social media can be a net positive for young people, and the ways in which it can lead to harm. This is complicated by the fact that although arguments in favour of age checks claim that the science on the relationship between the internet and young people is clear, the evidence on the effects of social media on minors is unsettled, and researchers have refuted claims that social media use is responsible for wellbeing crises among teenagers. This doesn’t mean that we shouldn’t consider the risks that may be associated with being young and online. 

But it’s clear that banning an entire age cohort from accessing certain information interferes with all users’ fundamental rights, and is therefore not a proportionate risk mitigation strategy. Under a mandatory age check regime, adults are also required to upload identifying documents just to access websites, interfering with their speech, privacy, and security online. At the same time, age checks are not even effective at accomplishing what they’re intended to achieve. Because age check mandates can and will be circumvented, they do little to protect children while undermining their fundamental rights to privacy, freedom of expression, and access to information crucial for their development.

At EFF, we have been firm in our advocacy against age verification mandates and often get asked what we think policymakers should do instead to protect users online. Our response is a nuanced one, recognizing that there is no easy technological fix for complex, societal challenges: Take a holistic approach to risk mitigation, strengthen user choice, and adopt a privacy-first approach to fighting online harms. 

Taking a Holistic Approach to Risk Mitigation 

In the European Union, the past years have seen the adoption of a number of landmark laws to regulate online services. With new rules such as the Digital Services Act or the AI Act, lawmakers are increasingly pivoting to risk-based approaches to regulate online services, attempting to square the circle by addressing known cases of harm while also providing a framework for dealing with possible future risks. It remains to be seen how risk mitigation will work out in practice and whether enforcement will genuinely uphold fundamental rights without enabling overreach. 

Under the Digital Services Act, this framework also encompasses rights-protective moderation of content relevant to the risks young people face on the services they use. Platforms may also come up with their own policies on how to moderate legal content that may be considered harmful, such as hate speech or violent content. Robust enforcement of their own community guidelines is one of the most important tools at platforms’ disposal, but such enforcement is unfortunately often lacking, including for categories of content harmful to children and teenagers, like pro-anorexia content.

To counterbalance potential negative implications on users’ rights to free expression, the DSA puts boundaries on platforms’ content moderation: Platforms must act objectively and proportionately and must take users’ fundamental rights into account when restricting access to content. Additionally, users have the right to appeal content moderation decisions and can ask platforms to review content moderation decisions they disagree with. Users can also seek resolution through out-of-court dispute settlement bodies, at no cost, and can ask nonprofits to represent them in the platform’s internal dispute resolution process, in out-of-court dispute settlements and in court. Platforms must also publish detailed transparency reports, and give researchers and non-profits access to data to study the impacts of online platforms on society. 

Beyond these specific obligations on platforms regarding content moderation, the protection of user rights, and improving transparency, the DSA obliges online platforms to take appropriate and proportionate measures to protect the privacy, security and safety of minors. Upcoming guidelines will hopefully provide more clarity on what this means in practice, but it is clear that there are a host of measures platforms can adopt before resorting to approaches as disproportionate as age verification.

The DSA also foresees obligations on the largest platforms and search engines – so-called Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs) that have more than 45 million monthly users in the EU – to analyze and mitigate so-called systemic risks posed by their services. This includes analyzing and mitigating risks to the protection of minors and the rights of the child, including freedom of expression and access to information. While we have some critiques of the DSA’s systemic risk governance approach, it is helpful for thinking through the actual risks for young people that may be associated with different categories of content, platforms, and their functionalities.

However, it is crucial that such risk assessments are not treated as mere regulatory compliance exercises, but put fundamental rights – and the impact of platforms and their features on those rights – front and center, especially in relation to the rights of children. Platforms would be well-advised to use risk assessments responsibly in their regular product and policy reviews when mitigating risks stemming from content, design choices, or features like recommender systems, ways of engaging with content and users, and online ads. Especially when it comes to the possible negative and positive effects of these features on children and teenagers, such assessments should be frequent and granular, expanding the evidence base available to both platforms and regulators. Additionally, platforms should allow external researchers to challenge and validate their assumptions and should provide extensive access to research data, as mandated by the DSA.

The regulatory framework to deal with potentially harmful content and protect minors in the EU is a new and complex one, and enforcement is still in its early days. We believe that its robust, rights-respecting enforcement should be prioritized before eyeing new rules and legal mandates. 

Strengthening Users’ Choice 

Many online platforms also deploy their own tools to help families navigate their services, including parental control settings and apps, specific offers tailored to the needs of children and teens, or features like reminders to take a break. While these tools are certainly far from perfect, and should not be seen as a sufficient measure to address all concerns, they do offer families an opportunity to set boundaries that work for them. 

Academic and civil society research underlines that better and more granular user controls can also be an effective tool to minimize content and contact risks: Allowing users to integrate third-party content moderation systems or recommendation algorithms would enable families to alter their children’s online experiences according to their needs.

The DSA takes a first helpful step in this direction by requiring online platforms to be transparent about the main parameters used to recommend content and to let users easily choose between different recommender systems when multiple options are available. The DSA also obliges VLOPs that use recommender systems to offer at least one option that is not based on profiling users, giving users of large platforms the choice to protect themselves from the often privacy-invasive personalization of their feeds. However, forgoing all personalization will likely not be attractive to most users, and platforms should give users the choice to use third-party recommender systems that better mirror their privacy preferences.

Giving users more control over which accounts can interact with them, and in which ways, can also help protect children and teenagers against unwanted interactions. Strengthening users’ choice also includes prohibiting companies from implementing user interfaces that have the intent or substantial effect of impairing autonomy and choice. This so-called “deceptive design” can take many forms, from tricking people into giving consent to the collection of their personal data, to encouraging the use of certain features. The DSA takes steps to ban dark patterns, but European consumer protection law must make sure that this prohibition is strictly enforced and that no loopholes remain. 

A Privacy-First Approach to Addressing Online Harms

While rights-respecting content moderation and tools to strengthen parents’ and children’s self-determination online are part of the answer, we have long advocated for a privacy-focused approach to fighting online harms.

We follow this approach for two reasons: On the one hand, privacy risks are complex, and young people cannot be expected to predict risks that may materialize in the future. On the other hand, many of the ways in which children and teenagers can be harmed online are directly linked to the accumulation and exploitation of their personal data.

Online services collect enormous amounts of personal data and use it to personalize and target their services – both the ads they display and the content their recommender systems curate. While ad targeting and content curation systems are distinct, both are based on the surveillance and profiling of users. In addition to allowing users to choose a recommender system, settings for all users should by default turn off recommender systems based on behavioral data. To protect all users’ privacy and data protection rights, platforms should have to ask for users’ informed, specific, voluntary, opt-in consent before collecting their data to personalize recommender systems. Privacy settings should be easily accessible and allow users to enable additional protections.

Data collection in the context of online ads is even more opaque. Due to the large number of ad tech actors and data brokers involved, it is practically impossible for users to give informed consent for the processing of their personal data. This data is used by ad tech companies and data brokers to profile users and draw inferences about what they like, what kind of person they are (including demographics like age and gender), and what they might be interested in buying, seeing, or engaging with. This information is then used by ad tech companies to target advertisements, including at children. Beyond undermining children’s privacy and autonomy, the online behavioral ad system teaches users from a young age that data collection, tracking, and profiling are unavoidable costs of using the web, thereby normalizing being tracked, profiled, and surveilled.

This is why we have long advocated for a ban on online behavioral advertising. Banning behavioral ads would remove a major incentive for companies to collect as much data as they do. The DSA already bans targeting minors with behavioral ads, but this protection should be extended to everyone. Banning behavioral advertising is the most effective path to disincentivizing the collection and processing of personal data and ending the surveillance of all users, including children, online.

Similarly, pay-for-privacy schemes should be banned, and we welcome the European Commission’s recent decision to fine Meta for breaching the Digital Markets Act by offering users a binary choice between paying for privacy and having their personal data used for ad targeting. Especially in the face of recent political pressure from the Trump administration not to enforce European tech laws, we applaud the European Commission for taking a clear stance and confirming that the protection of privacy online should never be a luxury or privilege. Vulnerable users like children, in particular, should not face a choice between paying extra (something many children cannot do) and being surveilled.

Congress Passes TAKE IT DOWN Act Despite Major Flaws

By: Jason Kelley
April 28, 2025 at 19:26

Today the U.S. House of Representatives passed the TAKE IT DOWN Act, giving the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don't like. President Trump himself has said that he would use the law to censor his critics. The bill passed the Senate in February, and it now heads to the president's desk. 

The takedown provision in TAKE IT DOWN applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting. The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply removing the speech rather than even attempting to verify it.

Congress is using the wrong approach to helping people whose intimate images are shared without their consent. TAKE IT DOWN pressures platforms to actively monitor speech, including speech that is presently encrypted. The law thus presents a huge threat to security and privacy online. While the bill is meant to address a serious problem, good intentions alone are not enough to make good policy. Lawmakers should be strengthening and enforcing existing legal protections for victims, rather than inventing new takedown regimes that are ripe for abuse. 

Texas’s War on Abortion Is Now a War on Free Speech

April 28, 2025 at 13:10

UPDATE May 8, 2025: A committee substitute of SB 2880 passed the Texas Senate on April 30, 2025, with the provisions related to interactive computer services and providing information on how to obtain an abortion-inducing drug removed. These provisions, however, currently remain in the House version of the bill, HB 5510.

Once again, the Texas legislature is coming after the most common method of safe and effective abortion today—medication abortion.

Senate Bill (S.B.) 2880* seeks to prevent the sale and distribution of abortion pills—but it doesn’t stop there. By restricting access to certain information online, the bill tries to keep people from learning about abortion drugs, or even knowing that they exist.

If passed, S.B. 2880 would make it illegal to “provide information” on how to obtain an abortion-inducing drug. If you exchange e-mails or have an online chat about seeking an abortion, you could violate the bill. If you create a website that shares information about legal abortion services in other states, you could violate the bill. Even your social media posts could put you at risk.

On top of going after online speakers who create and post content themselves, the bill also targets social media platforms, websites, email services, messaging apps, and any other “interactive computer service” simply for hosting or making that content available.

In other words, Texas legislators not only want to make sure no one can start a discussion on these topics, they also want to make sure no one can find one. The goal is to wipe this information from the internet altogether. That creates glaring free-speech issues with this bill and, if passed, the consequences would be dire.

The bill is carefully designed to scare people into silence.

First, S.B. 2880 empowers average citizens to sue anyone who violates the law. An “interactive computer service” can also be sued if it “allows residents of [Texas] to access information or material that aids, abets, assists or facilitates efforts to obtain elective abortions or abortion-inducing drugs.”

So, similar to Texas Senate Bill 8, the bill encourages anyone to file lawsuits against those who merely speak about or provide access to certain information. This is intended to, and will, chill free speech. The looming threat of litigation can be used to silence those who seek to give women truthful information about their reproductive options—potentially putting their health or lives in danger.

Second, S.B. 2880 encourages online intermediaries to take down abortion-related content. For example, if sued under the law, a defendant platform can escape liability by showing that, once discovered, they promptly “block[ed] access to any information . . . that assists or facilitates efforts to obtain elective abortions or abortion-inducing drugs.”

The bill also grants them “absolute and nonwaivable immunity” against claims arising from takedowns, denials of service, or any other “action taken to restrict access to or availability of [this] information.” In other words, if someone sues a social media platform or internet service provider for censorship, they are well-shielded from facing consequences. This further tips the scales in favor of blocking more websites, posts, and users.

In three different provisions of the 43-page bill, the drafters go out of their way to assure us that S.B. 2880 should not be construed to prohibit speech or conduct that’s protected by the First Amendment. But simply stating that the law does not restrict free speech does not make it so. The obvious goal of this bill is to restrict access to information about abortion medications online. It’s hard to imagine what claims could be brought under such a bill that don’t implicate our free speech rights.

The bill’s imposition of civil and criminal liability also conflicts with a federal law that protects online intermediaries’ ability to host user-generated speech, 47 U.S.C. § 230 (“Section 230”), including speech about abortion medication. Although the bill explicitly states that it does not conflict with Section 230, that assurance remains meaningful only so long as Section 230’s protections remain robust. But Congress is currently considering revisions to—or even a full repeal of—Section 230. Any weakening of Section 230 will create more space for those empowered by this bill to use the courts to pressure intermediaries and platforms to remove information about abortion medication.

Whenever the government tries to restrict our ability to access information, our First Amendment rights are threatened. This is exactly what Texas lawmakers are trying to do with S.B. 2880. Anyone who cares about free speech—regardless of how they feel about reproductive care—should urge lawmakers to oppose this bill and others like it.

*H.B. 5510 is the identical House version of S.B. 2880.

Leaders Must Do All They Can to Bring Alaa Home

April 25, 2025 at 04:24

It has now been nearly two months since UK Prime Minister Starmer spoke with Egyptian President Abdel Fattah el-Sisi, yet there has been no tangible progress in the case of Alaa Abd El Fattah, the British-Egyptian writer, activist, and technologist who remains imprisoned in Egypt.

In yet another blow to his family and supporters, who have been tirelessly advocating for his release, we’ve now learned that Alaa has fallen ill while on a sustained hunger strike protesting his incarceration. Alaa’s sentence was due to end last September.

Alaa’s mother, Laila Soueif, began a hunger strike on his intended release date to amplify demands for her son’s release. Soueif, too, is facing deteriorating health: after being hospitalized in London, and following Starmer’s subsequent call with el-Sisi, she shifted from a full hunger strike to a partial strike allowing 300 liquid calories a day. Today marks the 208th day of her hunger strike in protest at her son’s continued imprisonment in Egypt, and she risks serious complications. Calling for her son’s freedom, Soueif has warned that she will resume a full hunger strike if progress is not made soon on Alaa’s case.

As of April 24, Alaa is on Day 55 of a hunger strike that he began on 1 March. He is surviving on a strict ration of herbal tea, black coffee, and rehydration salts, and is now being treated in Wadi El-Natrun prison for severe stomach pains. In a letter to his family on April 20, Alaa described worsening conditions and side effects from medications administered by prison doctors: “the truth is the inflammation is getting worse … all these medicines are making me dizzy and yesterday my vision was hazy and I saw distant objects double.”

Responding to Alaa’s illness in prison, Alaa’s sister Sanaa Seif stated in a press release: “We are all so exhausted. My mum and my brother are literally putting their bodies on the line, just to give Alaa the freedom he deserves. Their health is so precarious, I’m always afraid that we are on the verge of a tragedy. We need Keir Starmer to do all he can to bring Alaa home to us.”

Alaa’s case has galvanized support from across the UK political spectrum, with more than 50 parliamentarians urging immediate action. Prime Minister Starmer has publicly committed to pressing for Alaa’s release, but these words must now be matched by action. As Alaa’s health deteriorates, and his family’s ordeal drags on, the need for decisive intervention has never been more urgent. The time to secure Alaa’s freedom—and prevent further tragedy—is now.

EFF continues to work with the campaign to free Alaa: his case is a critical test of digital rights, free expression, and international justice. 

Digital Identities and the Future of Age Verification in Europe

This is the first part of a three-part series about age verification in the European Union. In this blog post, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

As governments across the world pass laws to “keep children safe online,” more often than not, their notions of safety rest on platforms, websites, and online entities being able to discern users by age. This legislative trend has also arrived in the European Union, where online child safety is becoming one of the issues that will define European tech policy for years to come.

Like many policymakers elsewhere, European regulators are increasingly focused on a range of online harms they believe are associated with online platforms, such as compulsive design and the effects of social media consumption on children’s and teenagers’ mental health. Many of these concerns lack robust scientific evidence; studies have drawn a far more complex and nuanced picture of how social media and young people’s mental health interact. Still, calls for mandatory age verification have become as ubiquitous as they have become trendy. Heads of state in France and Denmark have recently called for banning children under 15 from social media Europe-wide, while Germany, Greece, and Spain are working on their own age verification pilots.

EFF has been fighting age verification mandates because they undermine the free expression rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security. We do not think that requiring service providers to verify users’ age is the right approach to protecting people online. 

Policymakers frame age verification as a necessary tool to prevent children from accessing content deemed unsuitable, to be able to design online services appropriate for children and teenagers, and to enable minors to participate online in age-appropriate ways. Rarely is it acknowledged that age verification undermines the privacy and free expression rights of all users, routinely blocks access to resources that can be life-saving, and undermines the development of media literacy. Rare, too, are critical conversations about the specific rights of young users: The UN Convention on the Rights of the Child clearly expresses that minors have rights to freedom of expression and access to information online, as well as the right to privacy. These rights are reflected in the European Charter of Fundamental Rights, which establishes the rights to privacy, data protection, and free expression for all European citizens, including children. These rights would be steamrolled by age verification requirements. And rarer still are policy discussions of ways to improve these rights for young people.

Implicitly Mandatory Age Verification

Currently, there is no legal obligation to verify users’ age in the EU. However, different European legal acts that recently entered into force or are being discussed implicitly require providers to know users’ ages or suggest age assessments as a measure to mitigate risks for minors online. At EFF, we consider these proposals akin to mandates because there is often no alternative method to comply except to introduce age verification. 

Under the General Data Protection Regulation (GDPR), in practice, providers will often need to implement some form of age verification or age assurance (depending on the type of service and risks involved): Article 8 stipulates that the processing of personal data of children under the age of 16 requires parental consent. Thus, service providers are implicitly required to make reasonable efforts to assess users’ ages – although the law doesn’t specify what “reasonable efforts” entails. 

Another example is the child safety article (Article 28) of the Digital Services Act (DSA), the EU’s recently adopted new legal framework for online platforms. It requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. The article also prohibits targeting minors with personalized ads. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy, and taking measures to protect minors specifically, but it's presently unclear which measures providers must take to comply with these obligations. Recital 71 of the DSA states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. The European Commission is currently working on guidelines for the implementation of Article 28 and may come up with criteria for what they believe would be effective and privacy-preserving age verification. 

The DSA does explicitly name age verification as one measure the largest platforms – so-called Very Large Online Platforms (VLOPs) that have more than 45 million monthly users in the EU – can choose to mitigate systemic risks related to their services. Those risks, while poorly defined, include negative impacts on the protection of minors and users’ physical and mental wellbeing. While this is also not an explicit obligation, the European Commission seems to expect adult content platforms to adopt age verification to comply with their risk mitigation obligations under the DSA.

Adding another layer of complexity, age verification is a major element of the dangerous European Commission proposal to fight child sexual abuse material through mandatory scanning of private and encrypted communication. While the negotiations of this bill have largely stalled, the Commission’s original proposal puts an obligation on app stores and interpersonal communication services (think messaging apps or email) to implement age verification. While the European Parliament has followed the advice of civil society organizations and experts and has rejected the notion of mandatory age verification in its position on the proposal, the Council, the institution representing member states, is still considering mandatory age verification. 

Digital Identities and Age Verification 

Leaving aside the various policy work streams that implicitly or explicitly consider whether age verification should be introduced across the EU, the European Commission seems to have decided on the how: Digital identities.

In 2024, the EU adopted the updated version of the so-called eIDAS Regulation, which sets out a legal framework for digital identities and authentication in Europe. Member States are now working on national identity wallets, with the goal of rolling out digital identities across the EU by 2026.

Despite the imminent rollout of digital identities in 2026, which could facilitate age verification, the European Commission clearly felt pressure to act sooner. That’s why, in the fall of 2024, the Commission published a tender for a “mini-ID wallet,” offering four million euros for the development of an “age verification solution” by the second quarter of 2025, to appease Member States anxious to introduce age verification today.

Favoring digital identities for age verification follows an overarching trend of pushing age assessment obligations continuously further down the stack – from apps to app stores to operating system providers. Dealing with age verification at the app store, device, or operating system level is also a demand long made by providers of social media and dating apps seeking to avoid liability for insufficient age verification. Embedding age verification at the device level will make it more ubiquitous and harder to avoid. This is a dangerous direction; digital identity systems raise serious concerns about privacy and equity.

This approach will likely also lead to mission creep: While the Commission limits its tender to age verification for 18+ services (specifically adult content websites), it is made abundantly clear that once available, age verification could be extended to “allow age-appropriate access whatever the age-restriction (13 or over, 16 or over, 65 or over, under 18 etc)”. Extending age verification is even more likely when digital identity wallets don’t come in the shape of an app, but are baked into operating systems. 

In the next post of this series, we will be taking a closer look at the age verification app the European Commission has been working on.

Cybersecurity Community Must Not Remain Silent On Executive Order Attacking Former CISA Director

Cybersecurity professionals and the infosec community have essential roles to play in protecting our democracy, securing our elections, and building, testing, and safeguarding government infrastructure. It is critically important for us to speak up to ensure that essential work continues and that those engaged in these good faith efforts are not maligned by an administration that has tried to make examples of its enemies in many other fields. 

President Trump has targeted the former Director of the government’s Cybersecurity and Infrastructure Security Agency (CISA), Chris Krebs, with an executive order cancelling the security clearances of employees at SentinelOne, where Krebs is now the Chief Intelligence and Public Policy Officer, and launching a probe of his work in the White House. President Trump had previously fired Krebs in 2020 when, in his capacity as CISA Director, Krebs released a statement calling that election, which Trump lost, "the most secure in American history.” 

The executive order directed a review to “identify any instances where Krebs’ or CISA’s conduct appears to be contrary to the administration’s commitment to free speech and ending federal censorship, including whether Krebs’ conduct was contrary to suitability standards for federal employees or involved the unauthorized dissemination of classified information.” Krebs was, in fact, fired for his public stance. 

We’ve seen this playbook before: In March, Trump targeted law firm Perkins Coie for its past work on voting rights lawsuits and its representation of the President’s prior political opponents in a shocking, vindictive, and unconstitutional executive order. After that order, many in the legal profession, including EFF, pushed back, issuing public statements and filing friend of the court briefs in support of Perkins Coie, and other law firms challenging executive orders against them. This public support was especially important in light of the fact that a few large firms capitulated to Trump rather than fight the orders against them.

It is critical that the cybersecurity community now join together to denounce this chilling attack on free speech and rally behind Krebs and SentinelOne, rather than cowering for fear of being next.

The White House must not be given free rein to turn cybersecurity professionals into political scapegoats. EFF regularly defends the infosec community, protecting researchers through education, legal defense, amicus briefs, and community involvement, with the goal of promoting innovation and safeguarding their rights. We call on its ranks to join us in defending Chris Krebs and SentinelOne. An independent infosec community is fundamental to protecting our democracy, and to the profession itself.

Congress Takes Another Step Toward Enabling Broad Internet Censorship

April 10, 2025 at 10:54

The House Energy and Commerce Committee on Tuesday advanced the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of certain kinds of troubling online content. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike.

As we’ve written before, while protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. 

take action

TELL CONGRESS: "Take It Down" Has No Real Safeguards

This bill mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without meaningfully addressing the problem it claims to solve. The “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. 

The legislation’s 48-hour takedown deadline means that online service providers, particularly smaller ones, will have to comply quickly to avoid legal risks. That time crunch will make it impossible for services to verify the content is in fact NCII. Instead, services will rely on automated filters—infamously blunt tools that frequently flag legal content, from fair-use commentary to news reporting.

Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. 

While several committee members offered amendments during committee consideration to address these problematic provisions, committee leadership rejected all attempts to amend the bill.

The TAKE IT DOWN Act is now expected to receive a floor vote in the coming weeks before heading to President Trump’s desk for his signature. Both the President himself and First Lady Melania Trump have been vocal supporters of this bill, and they have been urging Congress to quickly pass it. Trump has shown just how the bill can be abused, saying earlier this year that he would personally use the takedown provisions to censor speech critical of the president.

take action

TELL CONGRESS: "Take It Down" Has No Real Safeguards

Fast tracking a censorship bill is always troubling. TAKE IT DOWN is the wrong approach to helping people whose intimate images are shared without their consent. We can help victims of online harassment without embracing a new regime of online censorship.

Congress should strengthen and enforce existing legal protections for victims, rather than opting for a broad takedown regime that is ripe for abuse. 

Tell your Member of Congress to oppose censorship and to oppose S. 146.

EFF Joins Amicus Briefs Supporting Two More Law Firms Against Unconstitutional Executive Orders

By: David Greene
April 14, 2025 at 12:28

Update 4/25/25: EFF joined the ACLU and other legal advocacy organizations today in filing an additional amicus brief in support of the law firm Susman Godfrey LLP, which also has been targeted by President Donald Trump.

Update 4/11/25: EFF joined the ACLU and other legal advocacy organizations today in filing two additional amicus briefs in support of the law firms Jenner & Block and WilmerHale, which have also been targeted by President Donald Trump.

Original post published 4/3/25: EFF has joined the American Civil Liberties Union and other legal advocacy organizations across the ideological spectrum in filing an amicus brief asking a federal judge to strike down President Donald Trump’s executive order targeting law firm Perkins Coie for its past work on voting rights lawsuits and its representation of the President’s prior political opponents. 

As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, EFF unequivocally supports Perkins Coie’s challenge to this shocking, vindictive, and unconstitutional executive order. In punishing the law firm for its zealous advocacy on behalf of its clients, the March 6 order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie and the other targeted law firms that have chosen to fight back (and their legal representatives).

“If allowed to stand, these pressure tactics will have broad and lasting impacts on Americans' ability to retain legal counsel in important matters, to arrange their business and personal affairs as they like, and to speak their minds,” our brief says. 

Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).   

The executive order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.   

This is a fundamental abuse of executive power.   

The constitutional problems are legion, but here are a few:   

  • The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velasquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545.  
  • The executive order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Lozman v. City of Riviera Beach, 585 U.S. 87, 90 (2018). 
  • The executive order violates fundamental precepts of separation of powers and the Fifth and Sixth Amendment rights of litigants to select the counsel of their choice. See United States v. Gonzalez-Lopez, 548 U.S. 140, 147–48 (2006).  

An independent legal profession is a fundamental component of democracy and the rule of law. As a nonprofit legal organization that frequently sues the federal government, we well understand the value of this bedrock principle and how it – and First Amendment rights more broadly – are threatened by President Trump’s executive orders targeting Perkins Coie and other law firms. It is especially important that the whole legal profession speak out against the executive orders in light of the capitulation by a few large law firms. 

The order must be swiftly nullified by the U.S. District Court for the District of Columbia, and must be uniformly vilified by the entire legal profession. 

The ACLU’s press releases with quotes from fellow amici can be found here and here.

EFF Joins 7amleh Campaign to #ReconnectGaza

In times of conflict, the internet becomes more than just a tool—it is a lifeline, connecting those caught in chaos with the outside world. It carries voices that might otherwise be silenced, bearing witness to suffering and survival. Without internet access, communities become isolated, and the flow of critical information is disrupted, making an already dire situation even worse.

At this year’s RightsCon conference, hosted in Taiwan, Palestinian non-profit organization 7amleh, in collaboration with the Palestinian Digital Rights Coalition and supported by dozens of international organizations including EFF, launched #ReconnectGaza, a global campaign to rebuild Gaza’s telecommunications network and safeguard the right to communication as a fundamental human right.

The campaign comes on the back of more than 17 months of internet blackouts and destruction of Gaza’s telecommunications infrastructure by the Israeli authorities. Estimates indicate that 75% of Gaza’s telecommunications infrastructure has been damaged, with 50% completely destroyed. This loss of connectivity has crippled essential services—preventing healthcare coordination, disrupting education, and isolating Palestinians from the digital economy. In response, there is an urgent and immediate need to deploy emergency solutions, such as eSIM cards, satellite internet access, and mobile communications hubs.

At the same time, there is an opportunity to rebuild towards a just and permanent solution with modern technologies that would enable reliable, high-speed connectivity supporting education, healthcare, and economic growth. The campaign calls for this as a paramount component of reconnecting Gaza, whilst also ensuring the safety and protection of telecommunications workers on the ground, who risk their lives to repair and maintain critical infrastructure.

Further, beyond responding to these immediate needs, 7amleh and the #ReconnectGaza campaign demand the establishment of an independent Palestinian ICT sector, free from external control, as a cornerstone of Gaza’s reconstruction and Palestine’s digital sovereignty. Palestinians have been subject to Israeli internet controls since the Oslo Accords, which settled that Palestine should have its own telephone, radio, and TV networks, but handed over the details to a joint technical committee. Ending the deliberate isolation of the Palestinian people is critical to protecting fundamental human rights.

This is not the first time internet shutdowns have been weaponized as a tool for oppression. In 2012, Palestinians in Gaza were subject to frequent power outages and were forced to rely on generators and insecure dial-up connections for connectivity. More recently since October 7, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world.

Access to internet infrastructure is essential—it enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. It is imperative that people’s access to the internet remains protected.

The restoration of telecommunications in Gaza is an urgent humanitarian need. Global stakeholders, including UN agencies, governments, and telecommunications companies, must act swiftly to ensure the restoration and modernization of Gaza’s telecommunications.

EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community

March 13, 2025 at 13:30

In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:

People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. 

The revision of this policy timed to Trump’s second election demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.

In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads. 

The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre Marcos Passerini (CDMP), Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, and ITTC - Instituto Terra, Trabalho e Cidadania.

Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people!

If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.

EFF Stands with Perkins Coie and the Rule of Law

By: David Greene
March 12, 2025 at 13:50

As a legal organization that has fought in court to defend the rights of technology users for almost 35 years, including numerous legal challenges to federal government overreach, Electronic Frontier Foundation unequivocally supports Perkins Coie’s challenge to the Trump administration’s shocking, vindictive, and unconstitutional Executive Order. In punishing the law firm for its zealous advocacy on behalf of its clients, the order offends the First Amendment, the rule of law, and the legal profession broadly in numerous ways. We commend Perkins Coie (and its legal representatives) for fighting back. 

Lawsuits against the federal government are a vital component of the system of checks and balances that undergirds American democracy. They reflect a confidence in both the judiciary to decide such matters fairly and justly, and the executive to abide by the court’s determination. They are a backstop against autocracy and a sustaining feature of American jurisprudence since Marbury v. Madison, 5 U.S. 137 (1803).

The Executive Order, if enforced, would upend that system and set an appalling precedent: Law firms that represent clients adverse to a given administration can and will be punished for doing their jobs.  

This is a fundamental abuse of executive power. 

The constitutional problems are legion, but here are a few:  

  • The First Amendment bars the government from “distorting the legal system by altering the traditional role of attorneys” by controlling what legal arguments lawyers can make. See Legal Services Corp. v. Velasquez, 531 U.S. 533, 544 (2001). “An informed independent judiciary presumes an informed, independent bar.” Id. at 545. 
  • The Executive Order is also unconstitutional retaliation for Perkins Coie’s engaging in constitutionally protected speech during the course of representing its clients. See Nieves v. Bartlett, 587 U.S. 391, 398 (2019). 
  • And the Executive Order functions as an illegal loyalty oath for the entire legal profession, conditioning access to federal courthouses or client relationships with government contractors on fealty to the executive branch, including forswearing protected speech in opposition to it. That condition is blatantly unlawful:  The government cannot require that those it works with or hires embrace certain political beliefs or promise that they have “not engaged, or will not engage, in protected speech activities such as … criticizing institutions of government.”  See Cole v. Richardson, 405 U.S. 676, 680 (1972). 

Civil liberties advocates such as EFF rely on the rule of law and access to the courts to vindicate their clients’, and the public’s, fundamental rights. From this vantage point, we can see that this Executive Order is nothing less than an attack on the foundational principles of American democracy.  

The Executive Order must be swiftly nullified by the court and uniformly vilified by the entire legal profession.

Click here for the number to listen in on a hearing on a temporary restraining order, scheduled for 2pmET/11amPT Wednesday, March 12.

RightsCon Community Calls for Urgent Release of Alaa Abd El-Fattah

Last month saw digital rights organizations and social justice groups head to Taiwan for this year's RightsCon conference on human rights in the digital age. During the conference, one prominent message was spoken loud and clear: Alaa Abd El-Fattah must be immediately released from illegal detention in Egypt.

"As Alaa’s mother, I thank you for your solidarity and ask you to not to give up until Alaa is out of prison."

During the RightsCon opening ceremony, Access Now’s Executive Director, Alejandro Mayoral Baños, affirmed the urgency of Alaa’s situation in detention and called for Alaa’s freedom. The RightsCon community was also addressed by Alaa’s mother, mathematician Laila Soueif, who has been on hunger strike in London for 158 days. In a video highlighting Alaa’s work with digital rights and his role in this community, she stated: “As Alaa’s mother, I thank you for your solidarity and ask you not to give up until Alaa is out of prison.” Laila was admitted to hospital the next day with dangerously low blood sugar, blood pressure, and sodium levels.

RightsCon participants gather in solidarity with the #FreeAlaa campaign

The calls to #FreeAlaa and save Laila were again reaffirmed during the closing ceremony in a keynote by Sara Alsherif, Migrant Digital Justice Programme Manager at Open Rights Group and close friend of Alaa. Referencing Alaa’s early work as a digital activist, Alsherif said: “He understood that the fight for digital rights is at the core of the struggle for human rights and democracy.” She closed by reminding the hundreds-strong audience that “Alaa could be any one of us … Please do for him what you would want us to do for you if you were in his position.”

During RightsCon, with Laila still in hospital, calls for UK Prime Minister Starmer to get on the phone with Egyptian President Sisi reached a fever pitch, and on 28 February, one day after the closing ceremony, the UK government issued a press release affirming that Alaa’s case had been discussed, with Starmer pressing for Alaa’s freedom. 

Alaa should have been released on September 29, after serving a five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

Laila Soueif has been on hunger strike for more than five months while she and the rest of Alaa’s family have worked in concert with various advocacy groups to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs and public figures. Laila still remains in hospital, but following Starmer’s call with Sisi she agreed to take glucose, and she has stated that she is ready to end her hunger strike if progress is made.

Laila Soueif and family meeting with UK Prime Minister Keir Starmer

As of March 6, Laila has moved to a partial hunger strike of 300 calories per day citing “hope that Alaa’s case might move.” However, the family has learned that Alaa himself began a hunger strike on March 1 in prison after hearing that his mother had been hospitalized. Laila has said that without fast movement on Alaa’s case she will return to a total hunger strike. Alaa’s sister Sanaa, who was previously jailed by the regime on bogus charges, visited Alaa on March 8.

If you’re based in the UK, we encourage you to write to your MP to urgently advocate for Alaa’s release (external link): https://freealaa.net/message-mp 

Supporters everywhere can share Alaa’s plight and Laila’s story on social media using the hashtags #FreeAlaa and #SaveLaila. Additionally, the campaign’s website (external link) offers additional actions, including purchasing Alaa’s book, and participating in a one-day solidarity hunger strike. You can also sign up for campaign updates by e-mail.

Every second counts, and time is running out. Keir Starmer and the British government must do everything they can to ensure Alaa’s immediate and unconditional release.

First Porn, Now Skin Cream? ‘Age Verification’ Bills Are Out of Control

I’m old enough to remember when age verification bills were pitched as a way to ‘save the kids from porn’ and shield them from other vague dangers lurking in the digital world (like…“the transgender”). We have long cautioned about the dangers of these laws, and pointed out why they are likely to fail. While they may be well-intentioned, the growing proliferation of age verification schemes poses serious risks to all of our digital freedoms.

Fast forward a few years, and these laws have morphed into something else entirely—unfortunately, something we expected. What started as a misguided attempt to protect minors from "explicit" content online has spiraled into a tangled mess of privacy-invasive surveillance schemes affecting skincare products, dating apps, and even diet pills, threatening everyone’s right to privacy.

Age Verification Laws: A Backdoor to Surveillance

Age verification laws do far more than ‘protect children online’—they require the creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn first would be the easier path to introducing these age verification systems, since the subject is more emotionally charged and such bills are easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear from the wave of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills:

  1. Skincare: AB-728 in California
    Age verification just hit the skincare aisle! California’s AB-728 mandates age verification for anyone purchasing skin care products or cosmetics that contain certain chemicals like Vitamin A or alpha hydroxy acids. On the surface, this may seem harmless—who doesn't want to ensure that minors are safe from harmful chemicals? But the real issue lies in the invasive surveillance it mandates. A person simply trying to buy face cream could be forced to submit sensitive personal data through “an age verification system,” enabling constant tracking and data collection for a product that should be innocuous.
  2. Dating Apps: A3323 in New York
    Match made in heaven? Not without your government-issued ID. New York’s A3323 bill mandates that online dating services verify users’ age, identity, and location before allowing access to their platforms. The bill's sweeping requirements introduce serious privacy concerns for all users. By forcing users to provide sensitive personal information—such as government-issued IDs and location data—the bill creates significant risks that this data could be misused, sold, or exposed through data breaches. 
  3. Dieting products: SB 5622 in Washington State
    Shed your privacy before you shed those pounds! Washington State’s SB 5622 takes aim at diet pills and dietary supplements by restricting their sale to anyone under 18. While the bill’s intention is to protect young people from potentially harmful dieting products, it misses the mark by overlooking the massive privacy risks associated with the age verification process for everyone else. To enforce this restriction, the bill requires intrusive personal data collection for purchasing diet pills in person or online, opening the door for sensitive information to be exploited.

The Problem with Age Verification: No Solution Is Safe

Let’s be clear: no method of age verification is both privacy-protective and entirely accurate. The methods also don’t fall on a neat spectrum of “more safe” to “less safe.” Instead, every form of age verification is better described as “dangerous in one way” or “dangerous in a different way.” These systems are inherently flawed, and none come without trade-offs. Additionally, they continue to burden adults who just want to browse the internet or buy everyday items without being subjected to mass data collection.

For example, when an age verification system requires users to submit government-issued identification or a scan of their face, it collects a staggering amount of sensitive, often immutable, biometric or other personal data—jeopardizing internet users’ privacy and security. Systems that rely on credit card information, phone numbers, or other third-party data similarly amass troves of personal information. This data is just as susceptible to being misused as any other data, creating vulnerabilities for identity theft and data breaches. These issues are not just theoretical: age verification companies can be—and already have been—hacked. These are real, ongoing concerns for anyone who values their privacy.

We must push back against age verification bills that create surveillance systems and undermine our civil liberties, and we must be clear-eyed about the dangers posed by these expanding age verification laws. While the intent to protect children makes sense, the unintended consequence is a massive erosion of privacy, security, and free expression online for everyone. Rather than focusing on restrictive age verification systems, lawmakers should explore better, less invasive ways to protect everyone online—methods that don’t place the entire burden of risk on individuals or threaten their fundamental rights. 

EFF will continue to advocate for digital privacy, security, and free expression. We urge legislators to prioritize solutions that uphold these essential values, ensuring that the internet remains a space for learning, connecting, and creating—without the constant threat of surveillance or censorship. Whether you’re buying a face cream, swiping on a dating app, or browsing for a bottle of diet pills, age verification laws undermine that vision, and we must do better.

Trump Calls On Congress To Pass The “Take It Down” Act—So He Can Censor His Critics

Par : Jason Kelley
5 mars 2025 à 15:41

We've opposed the Take It Down Act because it could be easily manipulated to take down lawful content that powerful people simply don't like. Last night, President Trump demonstrated he has a similar view on the bill. He wants to sign the bill into law, then use it to remove content about — him. And he won't be the only powerful person to do so. 

Here’s what Trump said to a joint session of Congress:    

The Senate just passed the Take It Down Act…. Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody. 

Video courtesy C-SPAN.

The Take It Down Act is an overbroad, poorly drafted bill that would create a powerful system to pressure removal of internet posts, with essentially no safeguards. While the bill is meant to address a serious problem—the distribution of non-consensual intimate imagery (NCII)—the notice-and-takedown system it creates is an open invitation for powerful people to pressure websites into removing content they dislike. There are no penalties for applying very broad, or even farcical, definitions of what constitutes NCII and then demanding that it be removed.

take action

TELL CONGRESS: "Take It Down" Has No real Safeguards  

This Bill Will Punish Critics, and The President Wants It Passed Right Now 

Congress should believe Trump when he says he would use the Take It Down Act simply because he's "treated badly," regardless of the bill's stated intent. There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the notice-and-takedown system to remove speech that criticizes them or that they disagree with.

Trump has frequently targeted platforms, and the speakers they host, for entirely legal speech that is critical of him, both as an elected official and as a private citizen. He has filed frivolous lawsuits against media defendants that threaten to silence critics and drain scarce resources away from important reporting work.

Now that Trump has issued a call to action for the bill in his remarks, House Republicans may fast-track the bill into a spending package as soon as next week. Non-consensual intimate imagery is a serious problem that deserves serious consideration, not a hastily drafted, overbroad bill that sweeps in legal, protected speech.

How The Take It Down Act Could Silence People 

A few weeks ago, a "deepfake" video of President Trump and Elon Musk was displayed across various monitors in the Housing and Urban Development office. The video was subsequently shared on various platforms. While most people wouldn't consider this video, which displayed faked footage of Trump kissing Elon Musk's feet, "nonconsensual intimate imagery," the bill's takedown provision applies to any depiction of an “identifiable individual” engaged in “sexually explicit conduct.” This definition leaves much room for interpretation, and nudity or graphic display is not necessarily required.

Moreover, there are no penalties whatsoever to dissuade a requester from simply insisting that content is NCII. Apps and websites only have 48 hours to remove content once they receive a request, which means they won’t be able to verify claims. Especially if the requester is an elected official with the power to start an investigation or prosecution, what website would stand up to such a request?  

The House Must Not Pass This Dangerous Bill 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Take It Down would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. It will threaten encrypted services, which may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.   

Protecting victims of NCII is a legitimate goal. But good intentions alone are not enough to make good policy. Tell your Member of Congress to oppose censorship and to oppose H.R.633. 

take action

Tell the house to stop "Take it down" 

Ninth Circuit Correctly Rules That Dating App Isn’t Liable for Matching Users

The U.S. Court of Appeals for the Ninth Circuit correctly held that Grindr, a popular dating app, can’t be held responsible for matching users and enabling them to exchange messages that led to real-world harm. EFF and the Woodhull Freedom Foundation filed an amicus brief in the Ninth Circuit in support of Grindr.

Grindr and other dating apps are possible thanks to strong Section 230 immunity. Without this protection, dating apps—and other platforms that host user-generated content—would have more incentive to censor people online. While real-world harms do happen when people connect online, they can be directly redressed by holding the perpetrators accountable.

The case, Doe v. Grindr, was brought by a plaintiff who was 15 years old when he signed up for Grindr but claimed to be over 18 to use the app. He was matched with other users and exchanged messages with them. This led to four in-person meetings, and three of the four adult men he met were later prosecuted and sentenced for rape.

The plaintiff brought various state law claims against Grindr centering around the idea that the app was defectively designed, enabling him to be matched with and to communicate with the adults. The plaintiff also brought a federal civil sex trafficking claim.

Grindr invoked Section 230, the federal statute that has ensured a free and open internet for nearly 30 years. Section 230(c)(1) specifically provides that online services are generally not responsible for “publishing” harmful user-generated content. Section 230 protects users’ online speech by protecting the intermediaries we all rely on to communicate via dating apps, social media, blogs, email, and other internet platforms.

The Ninth Circuit rightly affirmed the district court’s dismissal of all of the plaintiff’s claims. The court held that Section 230 bars nearly all of plaintiff’s claims (except the sex trafficking claim, which is exempted from Section 230). The court stated:

Each of Doe’s state law claims necessarily implicates Grindr’s role as a publisher of third-party content. The theory underpinning Doe’s claims for defective design, defective manufacturing, and negligence faults Grindr for facilitating communication among users for illegal activity….

The Ninth Circuit’s holding is important because many plaintiffs have tried in recent years to plead around Section 230 by framing their cases as seeking to hold internet platforms responsible for their own “defective designs,” rather than third-party content. Yet, a closer look at a plaintiff’s allegations often reveals that the plaintiff’s harm is indeed premised on third-party content—that’s true in this case, where the plaintiff exchanged messages with the adult men. As we argued in our brief:

Plaintiff’s claim here is based not on mere access to the app, but on the actions of a third party once John Doe logged in—messages exchanged between a third party and Doe, and ultimately, on unlawful acts occurring between them because of those communications.

Additionally, courts generally have concluded that an internet platform’s features that relate to how users can engage with the app, and how third-party content is displayed and organized, are also “publishing” activities protected by Section 230.

As for the federal civil sex trafficking claim, the Ninth Circuit held that the plaintiff’s allegations failed to meet the statutory requirements. The court stated:

Doe must plausibly allege that Grindr ‘knowingly’ sex trafficked a person by a list of specified means. But the [complaint] merely shows that Grindr provided a platform that facilitated sharing of messages between users.

While the facts of this case are no doubt difficult, the Ninth Circuit reached the correct conclusion. Our modern communications are mediated by private companies, and any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate, as companies would be incentivized to engage in greater censorship of users to mitigate their legal exposure.

This does not leave victims without redress—they may seek to hold perpetrators responsible directly. Importantly in this case, three of the perpetrators were held criminally liable. And should facts show that an online service participated in criminal conduct, Section 230 would not block a federal prosecution. The court’s ruling demonstrates that Section 230 is working as Congress intended.

EFF to UK PM Starmer: Call Sisi to Free Alaa and Save Laila

25 février 2025 à 22:17

UK Prime Minister Keir Starmer made a public commitment on February 14 to Laila Soueif, the mother of Alaa Abd El Fattah, stating “I will do all that I can to secure the release of her son Alaa Abd el-Fattah and reunite him with his family.” While that commitment was welcomed by the family, it is imperative that it now be followed up with concrete action.

Laila has called on PM Starmer to speak directly to President Sisi of Egypt. Starmer has written to Sisi twice, in December and January, and his National Security Adviser, Jonathan Powell, discussed Alaa with Egyptian authorities in Cairo on January 2. UK authorities have not made public any further contact with Egypt since.

“all she wants is for [Alaa] to be free now that he served the full five year sentence, and after they stole 11 years of his and [his son] Khaled’s life.”

Laila, who has been on hunger strike since Alaa’s intended release date in September, was hospitalized on Monday night after her blood sugar dropped to worryingly low levels. A letter published today by her NHS doctor states that there is now an immediate risk to her life, including further deterioration or death. Nevertheless, Laila remains steadfast in her commitment to refrain from eating until her son is freed.

In the words of Alaa’s sister Mona Seif: “all she wants is for [Alaa] to be free now that he served the full five year sentence, and after they stole 11 years of his and [his son] Khaled’s life.”

Alaa is a British citizen, and as such his government owes him more than mere lip service. The UK government can and must use every tactic available to it, including:

  • Changing travel advice on the Foreign Office’s website to reflect the fact that citizens arrested in Egypt cannot be guaranteed consular access
  • Convening a joint meeting of ministers and officials of the Foreign, Commonwealth and Development Office; Ministry of Defence; and Department of Business and Trade to discuss a unified strategy toward Alaa’s case
  • Summoning the Egyptian ambassador in London and restricting his access to Whitehall if Alaa is not released and returned to the UK
  • Announcing a moratorium on any governmental assistance or promotion of new Foreign Direct Investments into Egypt, as called for by 15 NGOs in November.

EFF once again calls on Prime Minister Starmer to pick up the phone and call Egyptian President Sisi to free Alaa and save Laila—before it’s too late.

The Senate Passed The TAKE IT DOWN Act, Threatening Free Expression and Due Process

25 février 2025 à 16:10

Earlier this month, the Senate passed the TAKE IT DOWN Act (S. 146), by a voice vote. The bill is meant to speed up the removal of non-consensual intimate imagery, or NCII, including videos that imitate real people, a technology sometimes called “deepfakes.” 

Protecting victims of these heinous privacy invasions is a legitimate goal. But good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve. 

This misguided bill can still be stopped in the House of Representatives. Help us speak out against it now. 

take action

"Take It Down" Has No real Safeguards  

Before this vote, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate, asking them to change this legislation to protect legitimate speech that is not NCII. Changes are also needed to protect users who rely on encrypted services.

The letter explains that the bill’s “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims.

This would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Tell your Member of Congress to oppose censorship and to oppose S. 146. 

take action

Tell the house to stop "Take it down" 




The Judicial Conference Should Continue to Liberally Allow Amicus Briefs, a Critical Advocacy Tool

Par : Sophia Cope
21 février 2025 à 19:56

EFF does a lot of things, including impact litigation, legislative lobbying, and technology development, all to fight for your civil liberties in the digital age. With litigation, we directly represent clients and also file “amicus” briefs in court cases.

An amicus brief, also called a “friend-of-the-court” brief, is a brief we file when we don’t represent one of the parties on either side of the “v.” Instead, we provide the court with a helpful outside perspective on the case, either on behalf of ourselves or other groups, to help the court make its decision.

Amicus briefs are a core part of EFF’s legal work. Over the years, courts at all levels have extensively engaged with and cited our amicus briefs, showing that they value our thoughtful legal analysis, technical expertise, and public interest mission.

Unfortunately, the Judicial Conference—the body that oversees the federal court system—has proposed changes to the rule governing amicus briefs (Federal Rule of Appellate Procedure 29) that would make it harder to file such briefs in the circuit courts.

EFF filed comments with the Judicial Conference sharing our thoughts on the proposed rule changes (a total of 407 comments were filed). Two proposed changes are particularly concerning.

First, amicus briefs would be “disfavored” if they address issues “already mentioned” by the parties. This language is extremely broad and may significantly reduce the amount and types of amicus briefs that are filed in the circuit courts. As we said in our comments:

We often file amicus briefs that expand upon issues only briefly addressed by the parties, either because of lack of space given other issues that party counsel must also address on appeal, or a lack of deep expertise by party counsel on a specific issue that EFF specializes in. We see this often in criminal appeals when we file in support of the defendant. We also file briefs that address issues mentioned by the parties but additionally explain how the relevant technology works or how the outcome of the case will impact certain other constituencies.

We then shared examples of EFF amicus briefs that may have been disfavored if the “already mentioned” standard had been in effect, even though our briefs provided help to the courts. Just two examples are:

  • In United States v. Cano, we filed an amicus brief that addressed the core issue of the case—whether the border search exception to the Fourth Amendment’s warrant requirement applies to cell phones. We provided a detailed explanation of the privacy interests in digital devices, and a thorough Fourth Amendment analysis regarding why a warrant should be required to search digital devices at the border. The Ninth Circuit extensively engaged with our brief to vacate the defendant’s conviction.
  • In NetChoice, LLC v. Attorney General of Florida, a First Amendment case about social media content moderation (later considered by the Supreme Court), we filed an amicus brief that elaborated on points only briefly made by the parties about the prevalence of specialized social media services reflecting a wide variety of subject matter focuses and political viewpoints. Several of the examples we provided were used by the 11th Circuit in its opinion.

Second, the proposed rules would require an amicus organization (or person) to file a motion with the court and get formal approval before filing an amicus brief. This would replace the current rule, which also allows an amicus brief to be filed if both parties in the case consent (which is commonly what happens).

As we stated in our comments: “Eliminating the consent provision will dramatically increase motion practice for circuit courts, putting administrative burdens on the courts as well as amicus brief filers.” We also argued that this proposed change “is not in the interests of justice.” We wrote:

Having to write and file a separate motion may disincentivize certain parties from filing amicus briefs, especially people or organizations with limited resources … The circuits should … facilitate the participation by diverse organizations at all stages of the appellate process—where appeals often do not just deal with discrete disputes between parties, but instead deal with matters of constitutional and statutory interpretation that will impact the rights of Americans for years to come.

Amicus briefs are a crucial part of EFF’s work in defending your digital rights, and our briefs provide valuable arguments and expertise that help the courts make informed decisions. That’s why we are calling on the Judicial Conference to reject these changes and preserve our ability to file amicus briefs that make a difference in the federal appellate courts.

Your support is essential in ensuring that we can continue to fight for your digital rights—in and out of court.

DONATE TO EFF

EFF at RightsCon 2025

21 février 2025 à 12:31

EFF is delighted to be attending RightsCon again—this year hosted in Taipei, Taiwan, from 24 to 27 February.

RightsCon provides an opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. 

Many EFFers are heading to Taipei and will be actively participating in this year's event. Several members will be leading sessions, speaking on panels, and making themselves available for networking.

Our delegation includes:

  • Alexis Hancock, Director of Engineering, Certbot
  • Babette Ngene, Public Interest Technology Director
  • Christoph Schmon, International Policy Director
  • Cindy Cohn, Executive Director
  • Daly Barnett, Senior Staff Technologist
  • David Greene, Senior Staff Attorney and Civil Liberties Director
  • Jillian York, Director of International Freedom of Expression
  • Karen Gullo, Senior Writer for Free Speech and Privacy
  • Paige Collings, Senior Speech and Privacy Activist
  • Svea Windwehr, Assistant Director of EU Policy
  • Veridiana Alimonti, Associate Director For Latin American Policy

We hope you’ll have the opportunity to connect with us during the conference, especially at the following sessions: 

Day 0 (Monday 24 February)

Mutual Support: Amplifying the Voices of Digital Rights Defenders in Taiwan and East Asia

09:00 - 12:30, Room 101C
Alexis Hancock, Director of Engineering, Certbot
Host institutions: Open Culture Foundation, Odditysay Labs, Citizen Congress Watch and FLAME

This event aims to present Taiwan and East Asia’s digital rights landscape, highlighting current challenges faced by digital rights defenders and fostering resonance with participants' experiences. Join to engage in insightful discussions, learn from Taiwan’s tech community and civil society, and contribute to the global dialogue on these pressing issues. The form to register is here.

Platform accountability in crisis? Global perspective on platform accountability frameworks

09:00 - 13:00, Room 202A
Christoph Schmon, International Policy Director; Babette Ngene, Public Interest Technology Director
Host institutions: Electronic Frontier Foundation (EFF), Access Now

This high level panel will reflect on alarming developments in platforms' content policies and their enforcement, and discuss whether existing frameworks offer meaningful tools to counter the current platform accountability crisis. The starting point for the discussion will be Access Now's recently launched report Platform accountability: a rule-of-law checklist for policymakers. The panel will be followed by a workshop, dedicated to the “Draft Viennese Principles for Embedding Global Considerations into Human-Rights-Centred DSA enforcement”. Facilitated by the DSA Human Rights Alliance, the workshop will provide a safe space for civil society organisations to strategize and discuss necessary elements of a human rights based approach to platform governance.

Day 1 (Tuesday 25 February) 

Criminalization of Tor in Ola Bini’s case? Lessons for digital experts in the Global South

09:00 - 10:00 (online)
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Access Now, Centro de Autonomía Digital (CAD), Observation Mission of the Ola Bini Case, Tor Project

This session will analyze how the use of Tor is criminalized in Ola Bini’s case and its implications for digital experts in other contexts of criminalization in the Global South, especially when they defend human rights online. Participants will work through various exercises to: 1) analyze, from a technical perspective, the judicial criminalization of Tor in Ola Bini’s case; and 2) collectively analyze how its criminalization can (judicially) affect the work of digital experts from the Global South, and discuss possible support alternatives.

The counter-surveillance supply chain

11:30 - 12:30, Room 201F
Babette Ngene, Public Interest Technology Director
Host institution: Meta

The fight against surveillance and other malicious cyber adversaries is a whole-of-society problem, requiring international norms and policies, in-depth research, platform-level defenses, investigation, and detection. This dialogue focuses on the critical first link in this counter-surveillance supply chain: the on-the-ground organizations around the world that serve as the first contact for local activists and organizations dealing with targeted malware. It will include an open discussion on how to improve the global response to surveillance and surveillance-for-hire actors through a lens of local contextual knowledge and information sharing.

Day 3 (Wednesday 26 February) 

The right not to be subject to automated decisions: challenges and regulations in the judicial sector

16:30 - 17:30, Room 101C
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Hiperderecho, Red en Defensa de los Derechos Digitales, Instituto Panamericano de Derecho y Tecnología

This panel will analyze specific cases from Mexico, Peru, and Colombia to understand the ethical and legal implications of using artificial intelligence to draft and reason judicial rulings. The dialogue seeks to address the right not to be subject to automated decisions, along with the ethical and legal questions raised by automating judicial rulings. Some of these tools can reproduce or amplify discriminatory stereotypes, in addition to potentially violating rights to privacy and personal data protection, among others.

Prying Open the Age-Gate: Crafting a Human Rights Statement Against Age Verification Mandates

16:30 - 17:30, Room 401 
David Greene, Senior Staff Attorney and Civil Liberties Director
Host institutions: Electronic Frontier Foundation (EFF), Open Net, Software Freedom Law Centre, EDRi

The session will engage participants in considering the issues and seeding the drafting of a global human rights statement on online age verification mandates. After a background presentation on various global legal models to challenge such mandates (with the facilitators representing Asia, Africa, Europe, US), participants will be encouraged to submit written inputs (that will be read during the session) and contribute to a discussion. This will be the start of an ongoing effort that will extend beyond RightsCon with the goal of producing a human rights statement that will be shared and endorsed broadly. 

Day 4 (Thursday 27 February) 

Let's talk about the elephant in the room: transnational policing and human rights

10:15 - 11:15, Room 201B
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto

This dialogue focuses on growing trends surrounding transnational policing, which pose new and evolving challenges to international human rights. The session will distill emergent themes, with focal points including expanding informal and formal transnational cooperation and data-sharing frameworks at regional and international levels, the evolving role of borders in the development of investigative methods, and the proliferation of new surveillance technologies including mercenary spyware and AI-driven systems. 

Queer over fear: cross-regional strategies and community resistance for LGBTQ+ activists fighting against digital authoritarianism

11:30 - 12:30, Room 101D
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Access Now, Electronic Frontier Foundation (EFF), De|Center, Fight for the Future

The rise of the international anti-gender movement has seen authorities pass anti-LGBTQ+ legislation that has made the stakes of survival even higher for sexual and gender minorities. This workshop will bring together LGBTQ+ activists from Africa, the Middle East, Eastern Europe, Central Asia, and the United States to exchange ideas for advocacy and liberation from the policies, practices, and directives deployed by states to restrict LGBTQ+ rights, as well as how these actions impact LGBTQ+ people—online and offline—particularly with regard to online organizing, protest, and movement building.
