
Face Scans to Estimate Our Age: Harmful and Creepy AF

January 23, 2025 at 18:56

Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.
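
To make the failure mode concrete, here is a minimal simulation of an age gate built on a noisy estimator. This is our own sketch, not any vendor's code: the three-year error figure, the threshold, and the function names are all assumptions chosen for illustration.

```python
import random

random.seed(0)
ERROR_STDDEV = 3.0   # assumed typical estimation error, in years (illustrative)
THRESHOLD = 18       # the cutoff the gate enforces

def estimated_age(true_age: float) -> float:
    """Hypothetical estimator: the true age plus normally distributed error."""
    return true_age + random.gauss(0, ERROR_STDDEV)

def gate_allows(true_age: float) -> bool:
    """The gate only ever sees the estimate, never the true age."""
    return estimated_age(true_age) >= THRESHOLD

TRIALS = 100_000
adults_denied = sum(not gate_allows(20) for _ in range(TRIALS)) / TRIALS
minors_admitted = sum(gate_allows(16) for _ in range(TRIALS)) / TRIALS

print(f"20-year-olds wrongly denied:   {adults_denied:.1%}")
print(f"16-year-olds wrongly admitted: {minors_admitted:.1%}")
# With a three-year error, roughly a quarter of 20-year-olds are blocked and
# roughly a quarter of 16-year-olds pass: the errors run in both directions.
```

Shrinking the error helps at the margins, but no estimator reaches zero error, so some adults near the threshold will always be wrongly turned away.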

Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.
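
A schematic sketch may help clarify the “flip a switch” point. In modern face analysis pipelines, a scan is typically reduced to a numeric embedding, and what gets guessed depends on which small model reads that embedding. Everything below is a hypothetical stand-in for illustration, not any vendor's code.

```python
# One scan produces one numeric "embedding"; interchangeable "heads" read
# different things out of it. All functions here are dummies for illustration.

def face_embedding(scan: bytes) -> list[float]:
    """Stand-in for the shared feature extractor applied to a face scan."""
    return [(len(scan) % 7) / 7.0] * 128   # dummy 128-number feature vector

def age_head(embedding: list[float]) -> float:
    return 18 + 40 * embedding[0]          # dummy age estimate

def identity_head(embedding: list[float]) -> str:
    return "person_00042"                  # dummy match against a face database

def demographics_head(embedding: list[float]) -> dict:
    return {"gender": "?", "ethnicity": "?"}   # dummy demographic guesses

scan = b"...raw camera bytes..."
features = face_embedding(scan)   # the scan is collected once
for head in (age_head, identity_head, demographics_head):
    print(head(features))         # swapping the head changes what is guessed
```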

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, automated license plate reader (ALPR) errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
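
As a rough illustration of the lower-risk pattern just described, here is a sketch of an on-device check that sends the website only a yes/no answer and retains nothing. The function names are hypothetical, and this is the shape of a data-minimizing design, not a real SDK.

```python
def run_local_age_model(frame: bytes) -> float:
    """Stand-in for a hypothetical age model that runs entirely on the device."""
    # In a real system, a neural network would process the frame here.
    return 21.0   # placeholder output so this sketch runs

def check_age_on_device(camera_frame: bytes, threshold: int = 18) -> bool:
    """Estimate age locally and return only a pass/fail bit to the website."""
    estimate = run_local_age_model(camera_frame)
    is_old_enough = estimate >= threshold
    # Data minimization: the frame and the estimate are discarded immediately;
    # nothing is uploaded, logged, or stored.
    del camera_frame, estimate
    return is_old_enough

# The site receives a single boolean, never the face scan itself.
print(check_age_on_device(b"\x00" * (640 * 480)))   # dummy frame bytes
```

Even this design only lowers the risk: as noted above, a later update could quietly move the scan to the cloud or stop deleting the data, and the visitor would have no way to tell.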

Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods—whether from users whose consent was murky or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significantly here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forgo the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

The Impact of Age Verification Measures Goes Beyond Porn Sites

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court, Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly defined topics. Some block access to websites that contain “sexual material harmful to minors,” but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their speech for the same reasons, fearing de-platforming. The law leaves platforms unsure of, and unable to precisely exclude, the minimum amount of content that fits the bill's definition, driving them to over-censor content that may well include this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education and LGBTQ+ identity and health being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA.

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be portrayed as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms’ design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from “sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users who turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default.

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This patchwork of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression.

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Protecting “Free Speech” Can’t Just Be About Targeting Political Opponents

By Joe Mullin
January 22, 2025 at 10:40

The White House executive order “restoring freedom of speech and ending federal censorship,” published Monday, misses the mark on truly protecting Americans’ First Amendment rights. 

The order calls for an investigation of efforts under the Biden administration to “moderate, deplatform, or otherwise suppress speech,” especially on social media companies. It goes on to order an Attorney General investigation of any government activities “over the last 4 years” that are inconsistent with the First Amendment. The order states in part: 

Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.

But noticeably absent from the Executive Order is any commitment to government transparency. In the Santa Clara Principles, a guideline for online content moderation authored by EFF and other civil society groups, we state that “governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the request." This Executive Order doesn’t come close to embracing such a principle. 

The order is also misguided in its time-limited targeting. Informal government efforts to persuade, cajole, or strong-arm private media platforms, also called “jawboning,” have been an aspect of every U.S. government since at least 2011. Any good-faith inquiry into such pressures would not be limited to a single administration. It’s misleading to suggest the previous administration was the only, or even the primary, source of such pressures. This time limit reeks of political vindictiveness, not a true effort to limit improper government actions. 

To be clear, a look back at past government involvement in online content moderation is a good thing. But an honest inquiry would not be time-limited to the actions of a political opponent, nor limited to only past actions. The public would also be better served by a report that had a clear deadline, and a requirement that the results be made public, rather than sent only to the President’s office. Finally, the investigation would be better placed with an inspector general, not the U.S. Attorney General, which implies possible prosecutions. 

As we have written before, the First Amendment forbids the government from coercing private entities to censor speech. This principle has countered efforts to pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication about user speech is unconstitutional; some are beneficial, like when platforms reach out to government agencies as authoritative sources of information. 

For anyone who may have been excited to see a first-day executive order truly focused on free expression, President Trump’s Jan. 20 order is a disappointment, at best. 

VPNs Are Not a Solution to Age Verification Laws

VPNs are having a moment. 

On January 1st, Florida joined 18 other states in implementing an age verification law that burdens Floridians' access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub blocked access to users in Florida. Residents of the “Free State of Florida” have now lost access to the world's most popular adult entertainment website and the 16th-most-visited site of any kind in the world.

At the same time, Google Trends data showed a spike in searches for VPN access across Florida – presumably because users are trying to access the site via VPNs.

How Did This Happen?

Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access “adult” or “sexual” content online. Florida, Tennessee, and South Carolina are now among the nearly half of U.S. states where users can no longer access many major adult websites at all, while other sites require verification, due to restrictive laws touted as child protection measures. These laws introduce surveillance systems that threaten everyone’s rights to speech and privacy, and cause more harm than they seek to combat.

Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida decided to push ahead and enact one of the most contentious age verification mandates earlier this year in HB 3.

HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age verification requirements face civil penalties and could be subject to lawsuits from the state. 

Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking access to users in regions where such laws are enforced. Before the laws’ implementation date, Florida users were greeted with this message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access PORNHUB?” 

Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which could lead to potential breaches and misuse of that information. In a statement to local news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at risk:

Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.

This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law. Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.

The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance online. 

How Do VPNs Play a Role? 

Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. 

A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.
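
A toy example makes the mechanics concrete. A geoblocked site never observes your location directly; it sees only the IP address your connection arrives from and maps that address to a region. The addresses and lookup table below are invented for illustration (real sites use commercial GeoIP databases).

```python
BLOCKED_REGIONS = {"FL"}   # a state where the site refuses to operate

# Invented lookup table standing in for a GeoIP database.
FAKE_GEOIP_DB = {
    "203.0.113.7": "FL",    # the visitor's home connection (documentation IP)
    "198.51.100.9": "NL",   # a VPN exit server abroad (documentation IP)
}

def region_for(ip: str) -> str:
    return FAKE_GEOIP_DB.get(ip, "unknown")

def site_allows(source_ip: str) -> bool:
    """The site's entire view of 'where you are' is the source IP."""
    return region_for(source_ip) not in BLOCKED_REGIONS

print(site_allows("203.0.113.7"))    # False: direct connection from Florida
print(site_allows("198.51.100.9"))   # True: same user, tunneled through a VPN
```

Note what the tunnel does not change: the VPN operator now sees the traffic instead of your ISP, and the trackers listed above (cookies, ad IDs, fingerprinting) work the same either way.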

With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent these age verification requirements because each state or country may have different methods of enforcement and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

The ever-growing patchwork of age verification laws poses significant challenges for users trying to maintain anonymity online, and has the potential to harm us all—including the young people these laws are designed to protect.

What Can You Do?

If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy – a valuable resource for anyone considering these tools.

No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. And in the context of weakening rights for already vulnerable communities online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms.

Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom Foundation, and others.

EFF Statement on U.S. Supreme Court's Decision to Uphold TikTok Ban

By David Greene
January 17, 2025 at 10:49

We are deeply disappointed that the Court failed to require the strict First Amendment scrutiny required in a case like this, which would’ve led to the inescapable conclusion that the government's desire to prevent potential future harm had to be rejected as infringing millions of Americans’ constitutionally protected free speech. We are disappointed to see the Court sweep past the undisputed content-based justification for the law – to control what speech Americans see and share with each other – and rule only based on the shaky data privacy concerns.

The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.

Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards

By Jason Kelley
January 14, 2025 at 12:33

On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”  

But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.

Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson's alleged assassin. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users.

The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range-style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, ‘Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’”

A design for the Most Wanted CEO playing cards

Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition). 

As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods. 

Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.) 

TikTok reported that Harr violated the platform’s community guidelines, with no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal, and continued to remove additional accounts Harr created solely to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines.

On December 20, PayPal informed Harr it could no longer process payments for ComradeWorkwear, with no explanation of why. Shopify informed Harr that his store was selling “offensive content,” and that his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”

Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation, uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear notices and opportunities for appeal.

Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.  

Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton

By Jason Kelley
January 13, 2025 at 16:02

The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.  

The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.

The plaintiff in this case is the Free Speech Coalition, the nonprofit non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.

1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.  

Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why: 

While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof-of-age in physical spaces, there are practical differences that make those disclosures less burdensome or even nonexistent compared to online prohibitions. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might be seventeen or under.

First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.  

Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Submitting a government ID is the most common method of online age verification today, and the law doesn't set out a specific method for websites to verify ages. But fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.

Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of the user’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway.

Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.  

Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.  

2. HB 1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare.

Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over will not be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, both of which could be fly-by-night companies with no published privacy standards, will handle their data responsibly. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.
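
To illustrate that trust problem, here is a deliberately simplified sketch, with hypothetical names and data, of why a “verify and delete” service and a “verify and log” service are indistinguishable from the user's side of the connection.

```python
import datetime

RETAINED_LOG = []   # the visitor has no way to know whether this list exists

def verify_age(id_document: dict, requesting_site: str) -> bool:
    """What the user sees: a simple over-18 yes/no answer."""
    age = datetime.date.today().year - id_document["birth_year"]
    # What the user cannot see: the service may quietly keep a record that
    # links their identity to the specific site they were trying to visit.
    RETAINED_LOG.append({
        "name": id_document["name"],
        "site": requesting_site,
        "when": datetime.datetime.now().isoformat(),
    })
    return age >= 18

verify_age({"name": "Jane Roe", "birth_year": 1990}, "adult-site.example")
print(RETAINED_LOG)   # identity and a sensitive browsing habit, now joined
```

Both versions of the service return the same boolean; only the hidden log differs, and the person being verified cannot audit it.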

There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX encountered a breach, and there’s no reason to suspect this issue won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB 1181 creates a perfect storm for data privacy.

3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.  

More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography. 

It’s also not just adult content that’s at risk. A ruling from the Court on HB 1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media.

4. The Supreme Court Has Rightly Struck Down Similar Laws Before.  

In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case, the court ruled that many elements of the Communications Decency Act violated the First Amendment, including a part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.

The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web. 

Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

5. There is No Safe, Privacy-Protecting Age-Verification Technology.

The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act. 

* * *

 

The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law 

Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into virtually every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights.

For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review. 

 

EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes

Update: After this blog post was published (addressing Meta's blog post here), we learned Meta also revised its public "Hateful Conduct" policy in ways EFF finds concerning. We address these changes in this blog post, published January 9, 2025.

In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political topics. 

Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing misinformation and potentially give greater control to users. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has proved especially true in international contexts, where such organizations have been instrumental in refuting, for example, genocide denial.

So, even if Meta is changing how it uses and prioritizes fact-checking entities, we hope that Meta will continue to look to fact-checking entities as an available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.

Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look closely at its content moderation practices with regard to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.

Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more political than practical. There is of course no population that is inherently free from bias and by moving to Texas, the “concern” will likely not be reduced, but just relocated from perceived “California bias” to perceived “Texas bias.” 

Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in the United States. We applaud Meta’s efforts to try to fix its over-censorship problem but will watch closely to make sure it is a good-faith effort and rolled out fairly and not merely a political maneuver to accommodate the upcoming U.S. administration change. 

Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review

By Jason Kelley
January 1, 2025 at 10:26

At times this year, it seemed that Congress was going to give up its duty to protect our rights online—particularly when the Senate passed the dangerous Kids Online Safety Act (KOSA) by a large majority in July. But this legislation, which would chill protected speech and almost certainly result in privacy-invasive age verification requirements for many users to access social media sites, did not pass the House this year, thanks to strong opposition from EFF supporters and others.  

KOSA, first introduced in 2022, would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to content. Congress introduced a number of versions of the bill this year, and we analyzed each of them. Unfortunately, the threat of this legislation still looms over us as we head into 2025, especially now that the bill has passed the Senate. And just a few weeks ago, its authors introduced an amended version to respond to criticisms from some House members.  

Despite its many amendments in 2024, we continue to oppose KOSA. No matter which version becomes final, the bill will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms it identifies.   

Here’s how, and why, we worked to stop KOSA this year, and where the fight stands now.  

New Versions, Same Problems

The biggest problem with KOSA is in its vague “duty of care” requirements. Imposing a duty of care on a broad swath of online services, and requiring them to mitigate specific harms based on the content of online speech, will result in those services imposing age verification and content restrictions. We’ve been critical of KOSA for this reason since it was introduced in 2022. 

In February, KOSA's authors in the Senate released an amended version of the bill, in part as a response to criticisms from EFF and other groups. The updates changed how KOSA regulates design elements of online services and removed some enforcement mechanisms, but didn’t significantly change the duty of care, or the bill’s main effects. The updated version of KOSA would still create a censorship regime that would harm a large number of minors who have First Amendment rights to access lawful speech online, and force users of all ages to verify their identities to access that same speech, as we wrote at the time. KOSA’s requirements are comparable to cases in which the government tried to prevent booksellers from disseminating certain books; those attempts were found unconstitutional.

Kids Speak Out

The young people who KOSA supporters claim they’re trying to help have spoken up about the bill. In March, we published the results of a survey of young people who gave detailed reasons for their opposition to the bill. Thousands told us how beneficial access to social media platforms has been for them, and why they feared KOSA’s censorship. Too often we’re not hearing from minors in these debates at all, but we should be, because they will be most heavily impacted if KOSA becomes law.

Young people told us that KOSA would negatively impact their artistic education, their ability to find community online, their opportunity for self-discovery, and the ways that they learn accurate news and other information. To sample just a few of the comments: Alan, a fifteen-year-old, wrote,

I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for!  

More Recent Changes To KOSA Haven’t Made It Better 

In May, the U.S. House introduced a companion version to the Senate bill. This House version modified the bill around the edges, but failed to resolve its fundamental censorship problems. The primary difference in the House version was to create tiers that change how the law would apply to a company, depending on its size.  

These are insignificant changes, given that most online speech happens on just a handful of the biggest platforms. Those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care and would be held to the strictest knowledge standard. 

The other major shift was to update the definition of “compulsive usage” by suggesting it be linked to the Diagnostic and Statistical Manual of Mental Disorders, or DSM. But simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. 

KOSA Passes the Senate

KOSA passed through the Senate in July, though legislators on both sides of the aisle remain critical of the bill.  

A version of KOSA introduced in September tinkered with the bill again but did not change the censorship requirements. This version replaced language about anxiety and depression with a requirement that apps and websites prevent “serious emotional disturbance.”

In December, the Senate released yet another version of the bill—this one written with the assistance of X CEO Linda Yaccarino. This version includes a throwaway line about protecting the viewpoint of users as long as those viewpoints are “protected by the First Amendment to the Constitution of the United States.” But user viewpoints were never threatened by KOSA; rather, the bill has always been meant to threaten the hosts of user speech—and it still does.

KOSA would allow the FTC to exert control over online speech, and there’s no reason to think the incoming FTC won’t use that power. The nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has promised to protect free speech by “fighting back against the trans agenda,” among other things. KOSA would give the FTC, under this or any future administration, wide latitude to decide what sort of content should be restricted because the agency views it as harmful to kids. And even if the law is never enforced, just passing KOSA would likely result in platforms taking down protected speech.

If KOSA passes, we’re also concerned that it would lead to mandatory age verification on apps and websites. Such requirements have their own serious privacy problems; you can read more about our efforts this year to oppose mandatory online ID in the U.S. and internationally.   

EFF thanks our supporters, who have sent nearly 50,000 messages to Congress on this topic, for helping us oppose KOSA this year. In 2025, we will continue to rally to protect privacy and free speech online.   

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting Online ID Mandates: 2024 In Review

December 31, 2024 at 10:02

This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts because they censor the internet and burden access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.

Age verification bills generally require online services to verify all users’ ages—often through invasive tools like ID checks, biometric scans, and other dubious “age estimation” methods—before granting them access to certain online content or services. Some state bills mandate age verification explicitly, including Texas’s H.B. 1181, Florida’s H.B. 3, and Indiana’s S.B. 17. Other state bills claim not to require age verification, but still threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi’s H.B. 1126, Ohio’s Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users are minors without imposing age verification?

EFF’s answer: they can’t. We call these bills “implicit age verification mandates” because, though they might expressly deny requiring age verification, they still force platforms to either impose age verification measures or, worse, censor whatever content or features are deemed “harmful to minors” for all users—not just young people—in order to avoid liability.

Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric surveillance just to access lawful online speech.

EFF’s Work Opposing State Age Verification Bills

Last year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the states have passed at least one of these proposals into law.

Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional, confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in Ohio, Indiana, Utah, and Mississippi enjoined those states’ age verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all users—without restricting their speech.

Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.

California

In January, we submitted public comments opposing an especially vague and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, would severely limit access to important online discussions for both minors and adults, and would cause platforms to censor user content and impose mandatory age verification in order to avoid this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.

In February, we filed a friend-of-the-court brief arguing that California’s Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC’s age estimation scheme and vague description of “harmful content” render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely violate the First Amendment and provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the AADC’s age-verification provision specifically.

Later in the year, we also filed a letter to California lawmakers opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that allow politicians to define what “sexually explicit” content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We declared victory in September when the bill failed to pass the legislature.

New York

Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many methods of age verification listed in the attorney general’s call for comments is both privacy-protective and entirely accurate, as various experts have reported.

Texas

We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.

In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.

Mississippi

Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.

In our June brief for the district court, we once again explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and heavily cited our amicus brief.

Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical resources and support for the very same harms these bills claim to address. In our brief, we urged the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on for vibrant self-expression and crucial support.

Looking Ahead

As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are already on file.

EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review

December 29, 2024 at 05:50

This was a historic year: a year in which elections took place in countries home to almost half the world’s population, a year of war, and a year of collapse or chaos within several governments. It was also a year of new technological developments, policy changes, and legislative developments. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.

Internet shutdowns

It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions inhibit people from being able to share news of what’s happening on the ground, but they also impede access to basic services, commerce, and communications.

Repression of speech in times of conflict

But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.

Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel’s Cyber Unit, submitted comments to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (a position the Oversight Board notably agreed with), and submitted comments to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact that restrictions imposed by governments and companies have on expression.

In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.

Restrictions on content, age, and identity

Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed “harmful” by regulators to minors. And similarly in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has also enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.

While these governments ostensibly aim to protect children from harm, as we have repeatedly demonstrated, such laws can also harm young people by preventing them from accessing information that is not taught in schools or otherwise available in their communities.

One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.

Cybercrime

We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Global Age Verification Measures: 2024 in Review

December 27, 2024 at 13:29

EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.

Kids Experiencing Harm is Not Just an Online Phenomenon

In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face over $30 million in fines. This is similar to last year’s ban in France on social media access for children under 15 without parental consent, and Norway has also pledged to follow with a similar ban.

No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and from a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as online.

The internet is a valuable resource for both young people and adults who rely on the internet to find community and themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave and institute age verification tools to ensure that it happens. 

Limiting Access for Kids Limits Access for Everyone 

Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and the groups that serve them the most. History shows that over-censorship is inevitable.

This year, Canada also introduced an age verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. This was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online won’t help women or young people. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This creates a legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who just transmit it–knowingly or not. 

Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance 

Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways, or even shared with unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they do not possess the required form of ID.

Online age verification is not like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed. Without requiring all parties who may have access to the data to delete that data, such as third-party intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or processing sensitive documents like drivers’ licenses. 

Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website. 

These proposals are coming to the U.S. as well. We analyzed various age verification methods in comments to the New York Attorney General. None of them are both accurate and privacy-protective. 

The Scramble to Find an Effective Age Verification Method Shows There Isn't One

The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up with criteria for effective age verification. In parallel, the Commission has asked for proposals for a 'mini EU ID wallet' to implement device-level age verification ahead of the expected rollout of digital identities across the EU in 2026. At the same time, smaller social media companies and dating platforms have for years been arguing that age verification should take place at the device or app-store level, and they will likely support the Commission's plans. As we move into 2025, EFF will continue to follow these developments as it becomes clearer whether the Commission expects porn platforms to adopt age verification to comply with their risk mitigation obligations under the DSA.

Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these shortcomings, and to explore less invasive approaches to protecting all people from online harms.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Saving the Internet in Europe: Defending Free Expression

December 19, 2024 at 13:26

This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of internet access and digital services promised a new era of freedom of expression, where everyone could share and access information, speak out and find an audience without relying on gatekeepers, and make, tinker with, and share creative works.

Everyone should have the right to express themselves and share ideas freely. Various European countries have experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.

That’s why EFF has been fighting both government threats to free expression and working to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes, such as notifications and explanations to users when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders.

What’s the Digital Services Act and why do we keep talking about it? 

For the past several years, we have been especially busy addressing human rights concerns in the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta and app stores. However, not all of its rules apply to all services; instead, the DSA follows a risk-based approach that puts the most obligations on the largest services with the highest impact on users.

All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and they face further transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads.

The most stringent obligations apply to the largest online platforms and search engines, which have more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning, and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors, and public health. This novel approach might have merit, but it is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, such as age verification, have negative consequences of their own, like undermining users’ privacy and access to information.

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of the DSA’s systemic risks governance approach as a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not limit government authorities from becoming trusted flaggers.

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine freedom of expression. Even though we have been successful in heading off some of the most harmful ideas, concerns remain, especially with regard to the politicization of the DSA’s enforcement and potential over-enforcement. That’s why we will keep a close eye on the enforcement of the DSA, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.

European laws often implicate users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably cost adults their rights to private speech and to anonymous speech, which is sometimes necessary.

As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will keep fighting any efforts that fall short of protecting speech and privacy online.

Media freedom and plurality for everyone 

Another issue that we have been championing is media freedom. As with the DSA, the EU recently overhauled its rules for media services through the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content by media outlets. Though intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users could no longer trust that online platforms would address content violating community standards. Moreover, there is no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms’ ability to restrict or remove such content could undermine rather than foster public discourse.

The final version of EMFA introduced a number of important safeguards but is still a bad deal for users. We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media, and limit the use of spyware against journalists.

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past years, we have again and again revealed how companies’ broad-brush content moderation practices censor users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to share how restrictive speech policies often disproportionately affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

There’s No Copyright Exception to First Amendment Protections for Anonymous Speech

December 19, 2024 at 11:22

Some people just can’t take a hint. Today’s perfect example is a group of independent movie distributors that have repeatedly tried, and failed, to force Reddit to give up the IP addresses of several users who posted about downloading movies. 

The distributors claim they need this information to support their copyright claims against internet service provider Frontier Communications, because it might be evidence that Frontier wasn’t enforcing its repeat infringer policy and therefore couldn’t claim safe harbor protections under the Digital Millennium Copyright Act. Courts have repeatedly refused to enforce these subpoenas, recognizing the distributors couldn’t pass the test the First Amendment requires prior to unmasking anonymous speakers.

Here's the twist: after the magistrate judge in this case applied this standard and quashed the subpoena, the movie distributors sought review from the district court judge assigned to the case. The second judge also denied discovery as unduly burdensome but, in a hearing on the matter, said there was no First Amendment issue because the users were talking about copyright infringement. In their subsequent appeal to the Ninth Circuit, the distributors invite the appellate court to endorse the judge’s statement.

As we explain in an amicus brief supporting Reddit, the court should refuse that invitation. Discussions about illegal activity clearly are protected speech. Indeed, the Supreme Court recently affirmed that even “advocacy of illegal acts” is “within the First Amendment’s core.” In fact, protecting such speech is a central purpose of the First Amendment because it ensures that people can robustly debate civil and criminal laws and advocate for change. 

There is no reason to imagine that this bedrock principle doesn’t apply just because the speech concerns copyright infringement, especially where the speakers aren’t even defendants in the case, but independent third parties. And unmasking Does in copyright cases carries particular risks given the long history of copyright claims being used as an excuse to take down lawful as well as infringing content online.

We’re glad to see Reddit fighting back against these improper subpoenas, and proud to stand with the company as it stands up for its users. 

UK Politicians Join Organizations in Calling for Immediate Release of Alaa Abd El-Fattah

December 19, 2024 at 07:06

With the UK’s Prime Minister Keir Starmer and Foreign Secretary David Lammy having failed to secure the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah, UK politicians are calling for tougher measures to secure Alaa’s immediate return to the UK.

During a debate on detained British nationals abroad in early December, chairwoman of the Commons Foreign Affairs Committee Emily Thornberry asked the House of Commons why the UK has continued to organize industry delegations to Cairo while “the Egyptian government have one of our citizens—Alaa Abd El-Fattah—wrongfully held in prison without consular access.”

In the same debate, Labour MP John McDonnell urged the introduction of a “moratorium on any new trade agreements with Egypt until Alaa is free,” which was supported by other politicians. Liberal Democrat MP Calum Miller also highlighted words from Alaa, who told his mother during a recent prison visit that he had “hope in David Lammy, but I just can’t believe nothing is happening...Now I think either I will die in here, or if my mother dies I will hold him to account.”

Alaa’s mother, mathematician Laila Soueif, has been on hunger strike for 79 days while she and the rest of his family have worked to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs.

Support for Alaa has come from many directions. On December 6, 12 Nobel laureates wrote to Keir Starmer urging him to secure Alaa’s immediate release: “Not only because Alaa is a British citizen, but to reanimate the commitment to intellectual sanctuary that made Britain a home for bold thinkers and visionaries for centuries.” The pressure on Labour’s senior politicians has continued throughout the month, with more than 100 MPs and peers writing to David Lammy on December 15 demanding Alaa be freed.

Alaa should have been released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognize Alaa’s British citizenship.

David Lammy met with Alaa’s family in November and promised to take action. But the UK’s Prime Minister failed to raise the case at the G20 Summit in Brazil when he met with Egypt’s President El-Sisi. 

If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:

  1. Write to your MP (external link): https://freealaa.net/message-mp 
  2. Join Laila Soueif outside the Foreign Office daily between 10-11am
  3. Share Alaa’s plight on social media using the hashtag #freealaa

The UK Prime Minister and Foreign Secretary’s inaction is unacceptable. Every second counts, and time is running out. The government must do everything it can to ensure Alaa’s immediate and unconditional release.

EFF Statement on U.S. Supreme Court's Decision to Consider TikTok Ban

By David Greene
December 18, 2024 at 12:36

The TikTok ban itself and the DC Circuit's approval of it should be of great concern even to those who find TikTok undesirable or scary. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the U.S. has previously condemned globally.

The U.S. government should not be able to restrict speech—in this case by cutting off a tool used by 170 million Americans to receive information and communicate with the world—without proving with evidence that the tool is presently causing serious harm. But in this case, Congress has required and the DC Circuit has approved TikTok’s forced divestiture based only upon fears of future potential harm. This greatly lowers well-established standards for restricting freedom of speech in the U.S.

So we are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny.

Saving the Internet in Europe: How EFF Works in Europe

December 16, 2024 at 11:32

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings unique expertise in European digital policymaking and fundamental rights online. They engage with lawmakers, provide policy expertise, and coordinate EFF’s work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work does not only span EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and we also work on issues facing users in the Balkans and in accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked with advising on online regulation in Georgia, Moldova, and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online

Late last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.

TELL CONGRESS: VOTE NO ON KOSA


Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates designs of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

The authors have failed to grasp the difference between immunizing individual expression and protecting a platform from the liability that KOSA would place on it.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022. 

This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms must mitigate the harms listed in the bill, not users, and the platform’s ability to share users’ views is what’s at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply say that they aren’t enforcing it based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It’s a superfluous carveout for user speech and expression that KOSA never penalized in the first place, but which the platform would still be penalized for distributing. 

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression and protecting their own platform from the liability that KOSA would place on it.

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s issues has been its vague list of harms, which have remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill. 

 The bill doesn’t even require that the impact be a negative one. 

It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this updated definition cobbles together a definition that sounds just medical, or just legal, enough that it appears legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot. 

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. 

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as the incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it's never enforced; the FTC could simply express the types of content it believes harm children, and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be in a last-minute funding bill.

TELL CONGRESS: VOTE NO ON KOSA


This Bill Could Put A Stop To Censorship By Lawsuit

By Joe Mullin
December 5, 2024 at 13:38

For years now, deep-pocketed individuals and corporations have been turning to civil lawsuits to silence their opponents. These Strategic Lawsuits Against Public Participation, or SLAPPs, aren’t designed to win on the merits, but rather to harass journalists, activists, and consumers into silence by suing them over their protected speech. While 34 states have laws to protect against these abuses, there is still no protection at a federal level. 

Today, Reps. Jamie Raskin (D-MD) and Kevin Kiley (R-CA) introduced the bipartisan Free Speech Protection Act. This bill is the best chance we’ve seen in many years to secure strong federal protection for journalists, activists, and everyday people who have been subject to harassing meritless lawsuits. 

TAKE ACTION: Tell Congress we don’t want a weaponized court system

The Free Speech Protection Act is a long overdue tool to protect against the use of SLAPP lawsuits as legal weapons that benefit the wealthy and powerful. This bill will help everyday Americans of all political stripes who speak out on local and national issues. 

Individuals or companies who are publicly criticized (or even simply discussed) will sometimes use SLAPP suits to intimidate their critics. Plaintiffs who file these suits don’t need to win on the merits, and sometimes they don’t even intend to see the case through. But the stress of the lawsuit and the costly legal defense alone can silence or chill the free speech of defendants. 

State anti-SLAPP laws work. But since state laws are often not applicable in federal court, people and companies can still maneuver to manipulate the court system, filing cases in federal court or in states with weak or nonexistent anti-SLAPP laws. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income of around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, which said things like the landfill “affected our everyday life,” and, “You can’t walk outside, and you cannot breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick, an EFF Award winner, fought back in court, and his reporting remains online, but the legal fees had a big effect on his business. With a strong federal anti-SLAPP law, more writers and publishers will be able to fight back against bullying lawsuits without resorting to crowd-funding.

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Congressman Sued His Twitter Critics And Media Outlets 

In 2019, Rep. Devin Nunes, then a congressman representing parts of Central California, sued anonymous Twitter accounts. Nunes used the lawsuits to try to unmask and punish two users, @DevinNunesMom and @DevinCow, who criticized his actions as a politician. He filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s weak anti-SLAPP law has enticed many plaintiffs there.

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper The Fresno Bee, MSNBC, a group of his own constituents, and others. Nearly all of these lawsuits were dropped or dismissed by courts. If a federal anti-SLAPP law were in place, more defendants would have a chance of dismissing such lawsuits early and recouping their legal fees. 

Fast Relief From SLAPPs

The Free Speech Protection Act gives defendants in SLAPP suits a powerful tool to defend themselves.

The bill would allow a defendant sued for speaking out on a matter of public concern to file a special motion to dismiss, which the court must generally rule on within 90 days. If the court grants the speaker-defendant’s motion, the claims are dismissed. In many situations, defendants who prevail on an anti-SLAPP motion will be entitled to have the plaintiff reimburse their legal fees.

TAKE ACTION: Tell Congress to Pass the Free Speech Protection Act

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law would bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We urge Congress to pass the Free Speech Protection Act.

Tell the Senate: Don’t Weaponize the Treasury Department Against Nonprofits

By: Jason Kelley
November 27, 2024 at 14:04

Last week the House of Representatives passed a dangerous bill that would allow the Secretary of the Treasury to strip a U.S. nonprofit of its tax-exempt status. If it passes the Senate and is signed into law, H.R. 9495 would give broad and easily abused new powers to the executive branch. Nonprofits would not have a meaningful opportunity to defend themselves, and could be targeted without being told the reasons or evidence behind the decision.

This bill is an existential threat to nonprofits of all stripes. Future administrations could weaponize these powers to target nonprofits on either end of the political spectrum. Even organizations that are never targeted could see their activities chilled by the threat alone.

The bill’s authors have combined this attack on nonprofits, originally written as H.R. 6408, with other legislation that would prevent the IRS from imposing fines and penalties on hostages while they are held abroad. These are separate matters. Congress should separate these measures to allow a meaningful vote on this dangerous expansion of executive power. No administration should be given this much power to target nonprofits without due process.

TELL YOUR SENATOR: Protect Nonprofits

Over 350 civil liberties, religious, reproductive health, immigrant rights, human rights, racial justice, LGBTQ+, environmental, and educational organizations signed a letter opposing the bill as written. Now, we need your help. Tell the Senate not to pass H.R. 9495, the so-called “Stop Terror-Financing and Tax Penalties on American Hostages Act.”
