The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review

As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple years.

All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.

We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.

Public Officials Censoring Comments on Government Social Media Pages

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED

The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.

The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court's new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.

Government Mandates for Platforms to Carry Certain Online Speech

Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED  

The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.

In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

Government Coercion in Social Media Content Moderation

Case: Murthy v. Missouri – DECIDED

The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.

Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” grounds because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.

However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.

Some Takeaways From These Three Sets of Cases

The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.

Three key takeaways emerged across these five cases:

First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.

Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see them, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.

Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Government-Mandated Age Verification

Case: Free Speech Coalition v. Paxton – PENDING

Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review

People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.

Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables services to host our speech. EFF was there to fight back on behalf of all internet users.

Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone, including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.

EFF helps thwart effort to repeal Section 230

Members of Congress introduced a bill in May this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.

As we wrote at the time, repealing Section 230 would be a disaster for internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion, and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who remove or curate other users’ posts.

Moreover, repealing Section 230 would not have hurt the biggest online services, which have astronomical amounts of money and resources to handle the deluge of legal claims that would follow. Instead, repeal would have solidified their position at the top. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its dominance.

Thankfully, the bill did not advance, in part because internet users wrote to members of Congress objecting to the proposal. We hope lawmakers in 2025 put their energy toward ending Big Tech’s dominance by enacting a meaningful and comprehensive consumer data privacy law, or by passing laws that enable greater interoperability and competition between social media services. Those efforts would go a long way toward curbing Big Tech’s power without harming users’ online speech.

EFF stands up for users’ speech in courts

Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that will jeopardize people’s ability to read other people’s posts and make use of basic features of online services that benefit all users.

In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to recommend certain content reflect its own speech, and thus Section 230’s protections do not apply.

We filed a friend-of-the-court brief in support of TikTok’s request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court to review the case.

In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit claiming that basic features of the service, such as disappearing messages, “Stories,” and the ability to befriend mutual acquaintances, amounted to defectively designed products. The trial court’s ruling departs from a long line of court decisions holding that such claims essentially try to plead around Section 230 by blaming a service’s features, rather than the illegal content that users created with those features.

We filed a friend-of-the-court brief in support of Snap’s effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability of all internet users to rely on basic features of a given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature at all, despite the fact that the majority of people use it for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before the trial court.

EFF supports effort to empower users to customize their online experiences

While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.

That is precisely the goal of a tool being developed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst, known as Unfollow Everything 2.0. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a friend-of-the-court brief in support, arguing that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.

The district court hearing the case unfortunately dismissed the case, but its ruling did not reach the merits of whether Section 230 protected Unfollow Everything 2.0. The court gave Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.

EFF to Court: Reject X’s Effort to Revive a Speech-Chilling Lawsuit Against a Nonprofit

This post was co-written by EFF legal intern Gowri Nayar.

X’s lawsuit against the nonprofit Center for Countering Digital Hate is intended to stifle criticism and punish the organization for its reports criticizing the platform’s content moderation practices, and a previous ruling dismissing the lawsuit should be affirmed, EFF and multiple organizations argued in a brief filed this fall. 

X sued the Center for Countering Digital Hate (“CCDH”) in federal court in 2023 in response to its reports, which concluded that X’s practices have facilitated an environment of hate speech and misinformation online. Although X’s suit alleges, among other things, breach of contract and violation of the Computer Fraud and Abuse Act, the case is really about X trying to hold CCDH liable for the public controversy surrounding its moderation practices. At bottom, X is claiming that CCDH damaged the platform by critically reporting on it.

CCDH sought to throw out the case on the merits and under California’s anti-SLAPP statute, which allows courts to dismiss lawsuits filed in retaliation for someone’s exercise of their free speech rights (such suits are known as Strategic Lawsuits Against Public Participation, or SLAPPs). In March, the district court ruled in favor of CCDH, dismissed the case, and found that the lawsuit was a SLAPP.

As the district judge noted, X’s suit “is about punishing the Defendants for their speech.” The court was right to reject X’s contract and CFAA theories and to see them for what they were: grievances with CCDH’s criticisms masquerading as legal claims.

X appealed the ruling to the U.S. Court of Appeals for the Ninth Circuit earlier this year. In September, EFF, along with the ACLU, ACLU of Northern California, and the Knight First Amendment Institute at Columbia University, filed an amicus brief in support of CCDH.        

The amicus brief argues that the Ninth Circuit should not allow X to make use of state contract law and a federal anti-hacking statute to stifle CCDH’s speech. Through this lawsuit, X wants to punish CCDH for publishing reports that highlighted how X’s policies and practices are allowing misinformation and hate speech to thrive on its platform. We also argue against the enforcement of X’s anti-scraping provisions because of how vital scraping is to modern journalism and research.

Lastly, we called on the court to reject X’s interpretation of the CFAA because it relies on a legal theory that courts—including the Ninth Circuit itself—have already repudiated in earlier cases. Allowing the CFAA to be used to criminalize all instances of unauthorized access would run counter to prior decisions and would render illegal large categories of activities such as sharing passwords with friends and family.

Ruling in favor of X in this lawsuit would set a very dangerous precedent for free speech rights and allow powerful platforms to exert undue control over information online. We hope the Ninth Circuit affirms the lower court decision and dismisses this meritless lawsuit.

EFF Lawsuit Discloses Documents Detailing Government’s Social Media Surveillance of Immigrants

Despite giving a federal program that surveils the social media activities of immigrants and foreign visitors a more benign name, the government agreed to spend more than $100 million to continue monitoring people’s online activities, records disclosed to EFF show.

Thousands of pages of government procurement records and related correspondence show that the Department of Homeland Security and its component, Immigration and Customs Enforcement, largely continued an effort, originally called extreme vetting, to try to determine whether immigrants posed any threat by monitoring their social media and internet presence. The only real change appeared to be rebranding the program as the Visa Lifecycle Vetting Initiative.

The government disclosed the records to EFF after we filed suit in 2022 to learn what had become of a program proposed by President Donald Trump. The program continued under President Joseph Biden. Regardless of the name used, DHS’s program raises significant free expression and First Amendment concerns because it chills the speech of those seeking to enter the United States and allows officials to target and punish them for expressing views they don’t like.

Yet that appears to be a major purpose of the program, the released documents show. For example, the terms of the contracting request specify that the government sought a system that could:

analyze and apply techniques to exploit publicly available information, such as media, blogs, public hearings, conferences, academic websites, social media websites such as Twitter, Facebook, and LinkedIn, radio, television, press, geospatial sources, internet sites, and specialized publications with intent to extract pertinent information regarding individuals.

That document and another one make explicit that one purpose of the surveillance and analysis is to identify “derogatory information” about visa applicants and other visitors. The vague phrase is broad enough to potentially capture any online expression that is critical of the U.S. government or its actions.

EFF has called on DHS to abandon its online social media surveillance program because it threatens to unfairly label individuals as a threat or otherwise discriminate against them on the basis of their speech. This could include denying people access to the United States for speaking their mind online. It’s also why EFF has supported a legal challenge to a State Department practice requiring people applying for a visa to register their social media accounts with the government.

The documents released in EFF’s lawsuit also include a telling passage about the controversial program and the government’s efforts to sanitize it. In an email discussing the lawsuit against the State Department’s social media moniker collection program, an ICE official describes the government’s need to rebrand the program, “from what ICE originally referred to as the Extreme Vetting Initiative.”

The official wrote:

On or around July 2017 at an industry day event, ICE sought input from the private sector on the use of artificial intelligence to assist in visa applicant vetting. In the months that followed there was significant pushback from a variety channels, including Congress. As a result, on or around May 2018, ICE modified its strategy and rebranded the concept as the Visa Lifecycle Vetting Project.

Other documents detail the specifics of the contract and bidding process that resulted in DHS awarding $101,155,431.20 to SRA International, Inc., a government contractor that now operates under a different name after merging with another contractor. The company is owned by General Dynamics.

The documents also detail an unsuccessful effort by a competitor to overturn DHS’s decision to award the contract to SRA, though much of the content of that dispute is redacted.

All of the documents released to EFF are available on DocumentCloud.

EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0

EFF along with the ACLU of Northern California and the Center for Democracy & Technology filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed.

Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile—but automation makes this process a breeze. Unfollowing all friends, groups, and pages makes the News Feed blank, but this allows you to curate your News Feed by refollowing people and organizations you want regular updates on. Importantly, unfollowing isn’t the same thing as unfriending—unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles.

As Louis Barclay, the developer of Unfollow Everything 1.0, explained:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Prof. Zuckerman fears being sued by Meta, Facebook’s parent company, because the company previously sent Louis Barclay a cease-and-desist letter. Prof. Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools.

In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case.

The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. 

But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use.

Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0.

Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.”

Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing the tools that block methods of unwanted corporate tracking such as advertising cookies, and block stalkerware deployed by malicious actors.

We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.

The New U.S. House Version of KOSA Doesn’t Fix Its Biggest Problems

An amended version of the Kids Online Safety Act (KOSA) being considered this week in the U.S. House is still a dangerous online censorship bill that contains many of the same fundamental problems as the version the Senate passed in July. The changes to the House bill do not alter the fact that KOSA will coerce the largest social media platforms into blocking or filtering a variety of entirely legal content, and will subject a large portion of users to privacy-invasive age verification. The changes do, however, bring KOSA closer to becoming law, and put us one step closer to giving government officials dangerous and unconstitutional power over what types of content can be shared and read online.

Reframing the Duty of Care Does Not Change Its Dangerous Outcomes

For years now, digital rights groups, LGBTQ+ organizations, and many others have been critical of KOSA’s “duty of care.” While the language has been modified slightly, this version of KOSA still creates a duty of care and a negligence standard of liability that will allow the Federal Trade Commission to sue apps and websites that don’t take measures to “prevent and mitigate” various harms to minors, harms defined vaguely enough to chill a significant amount of protected speech.

The biggest shift to the duty of care is in the description of the harms that platforms must prevent and mitigate. Among other harms, the previous version of KOSA included anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, “consistent with evidence-informed medical information.” The new version drops this section and replaces it with the “promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” The bill defines “serious emotional disturbance” as “the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.”

Despite the new language, this provision is still broad and vague enough that no platform will have any clear indication about what they must do regarding any given piece of content. Its updated list of harms could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. It is still likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech—and important resources—about topics like addiction, eating disorders, and bullying. And it will stifle minors who are trying to find their own supportive communities online.  

Kids will, of course, still be able to find harmful content, but the largest platforms—where the most kids are—will face increased liability for letting any discussion about these topics occur. It will be harder for suicide prevention messages to reach kids experiencing acute crises, harder for young people to find sexual health information and gender identity support, and generally, harder for adults who don’t want to risk the privacy- and security-invasion of age verification technology to access that content as well.  

As in the past version, enforcement of KOSA is left up to the FTC, and, to some extent, state attorneys general around the country. Whether or not you agree with those enforcers on what encompasses a “diagnosable mental, behavioral, or emotional disorder,” the fact remains that KOSA's flaws are as much about the threat of liability as about actual enforcement. As long as these definitions remain vague enough that platforms have no clear guidance on what is likely to cross the line, there will be censorship—even if the officials never actually take action. 

The previous House version of the bill stated that “A high impact online company shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors.” The new version slightly modifies this to say that such a company "shall create and implement its design features to reasonably prevent and mitigate the following harms to minors.” These language changes are superficial; this section still imposes a standard that requires platforms to filter user-generated content and imposes liability if they fail to do so “reasonably.” 

House KOSA Edges Closer to Harmony with Senate Version 

Some of the latest amendments to the House version of KOSA bring it closer in line with the Senate version which passed a few months ago (not that this improves the bill).  

This version of KOSA lowers the bar, set by the previous House version, that determines which companies would be impacted by KOSA’s duty of care. While the Senate version of KOSA does not have such a limitation (and would affect small and large companies alike), the previous House version created a series of tiers for differently-sized companies. This version has the same set of tiers, but lowers the highest bar from companies earning $2.5 billion in annual revenue, or having 150 million annual users, to companies earning $1 billion in annual revenue, or having 100 million annual users.  

This House version also includes the “filter bubble” portion of KOSA which was added to the Senate version a year ago. This requires any “public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user-generated content” to provide users with an algorithm that uses a limited set of information, such as search terms and geolocation, but not search history (for example). This section of KOSA is meant to push users towards a chronological feed. As we’ve said before, there’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.   

Lastly, the House authors have added language that would have no actual effect on how platforms or courts interpret the law, but which points directly to the concerns we’ve raised. It states that “a government entity may not enforce this title or a regulation promulgated under this title based upon a specific viewpoint of any speech, expression, or information protected by the First Amendment to the Constitution that may be made available to a user as a result of the operation of a design feature.” Yet KOSA does just that: the FTC will have the power to force platforms to moderate or block certain types of content based entirely on the views described therein.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Remains an Unconstitutional Censorship Bill 

KOSA remains woefully underinclusive—for example, Google's search results will not be impacted regardless of what they show young people, but Instagram is on the hook for a broad amount of content—while making it harder for young people in distress to find emotional, mental, and sexual health support. This version does only one important thing—it moves KOSA closer to passing in both houses of Congress, and puts us one step closer to enacting an online censorship regime that will hurt free speech and privacy for everyone.

Court to California: Try a Privacy Law, Not Online Censorship

In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

The court also provided a helpful roadmap to legislatures for how to write privacy first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

  1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
  2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization, and limits on processing precise geolocation data.
  3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, privacy principles in the Act could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

“There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court in the first instance to apply the correct tests.

In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate allegations against privacy laws might best be resolved with “as-applied claims” in specific contexts, instead of on their face.

Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.

EFF Presses Federal Circuit To Make Patent Case Filings Public

Federal court records belong to everyone. But one federal court in Texas lets patent litigants treat courts like their own private tribunals, effectively shutting out the public.

When EFF tried to intervene and push for greater access to a patent dispute earlier this year, the U.S. District Court for the Eastern District of Texas rejected our effort.  EFF appealed and last week filed our opening brief with the U.S. Court of Appeals for the Federal Circuit.

EFF is not the only one concerned by the district court’s decision. Several organizations filed friend-of-the-court briefs in support of our appeal. Below, we explain the stakes of this case and why others are concerned about the implications of the district court’s secrecy.  

Courts too often let patent litigants shut out the public

Secrecy in patent litigation is an enduring problem, and EFF has repeatedly pushed for greater transparency by intervening in patent lawsuits to vindicate the public’s right to access judicial records.

But sometimes, courts don’t let us—and instead decide to prioritize corporations’ confidentiality interests over the public’s right to access records filed on the record in the public’s courts.

That’s exactly what happened in Entropic Communications, LLC v. Charter Communications, Inc. Entropic, a semiconductor provider, sued Charter, one of the nation’s largest media companies, for allegedly infringing six Entropic patents for cable modem technology. Charter argued that it had a license defense because the patents cover technology required to comply with the industry-leading cable data transmission standard, Data Over Cable Service Interface Specification (DOCSIS). Its argument raises a core patent law question: when is a patent “essential” to a technical standard, and thus encumbered by licensing commitments?

Many of the documents explaining the parties’ positions on this important issue are either completely sealed or heavily redacted, making it difficult for the public to understand their arguments. Worse, the parties themselves decided which documents to prevent the public from viewing.

District court rejects EFF’s effort to make case more transparent

The kind of collusive secrecy in this case is illegal—courts are required to scrutinize every line that a party seeks to redact, to ensure that nothing is kept secret unless it satisfies a rigorous balancing test. Under that test, proponents of secrecy need to articulate a specific reason to seal the document sufficient to outweigh the strong presumption that all filings will be public. The court didn’t do any of that here. Instead, it allowed the parties to seal all documents they deemed “confidential” under a protective order, which applies to documents produced in discovery.

That’s wrong: protective orders do not control whether court filings may be sealed. But unfortunately, the district court’s misuse of these protective orders is extremely common in patent cases in the Eastern District of Texas. In fact, the protective order in this case closely mirrors the “model protective order” created by the court for use in patent cases, which also allows parties to seal court filings free from judicial scrutiny or even the need to explain why they did so.

Those concerns prompted EFF in March to ask the court to allow it to intervene and challenge the sealing practices. The court ruled in May that EFF could not intervene in the case, leaving no one to advocate for the public’s right of access. It further ruled that the sealing practices were legal because local rules and the protective order authorized the parties to broadly make these records secret. The end result? Excessive secrecy that wrongfully precludes public scrutiny over patent cases and decisions in this district.

The district court’s errors in this case create a bad precedent that undermines a cornerstone of the American justice system: judicial transparency. Without transparency, the public cannot ensure that its courts are acting fairly, eroding public trust in the judiciary.

EFF’s opening brief explains the district court’s errors

EFF disagreed with the district court’s ruling, and last week filed its opening brief challenging the decision. As we explained in our opening brief:

The public has presumptive rights under the common law and First Amendment to access summary judgment briefs and related materials filed by Charter and Entropic. Rather than protect public access, the district court permitted the parties to file vast swaths of material under seal, some of which remains completely secret or is so heavily redacted that EFF cannot understand legal arguments and evidence used in denying Charter’s license defense.

Moreover, the court’s ruling that EFF could not even seek to unseal the documents in the first place sets a dangerous precedent. If the decision is upheld, many court dockets, including those with significant historic and newsworthy materials, could become permanently sealed merely because the public did not try to intervene and unseal records while the case was open.

EFF’s brief argued that:

The district court ignored controlling law and held EFF to an arbitrary timeliness standard that the Fifth Circuit has explicitly rejected—including previously reversing the district court here. Neither controlling law nor the record support the district court’s conclusion that Charter and Entropic would be prejudiced by EFF’s intervention. Troublingly, the district court’s reasoning for denying EFF’s intervention could inhibit the public from coming forward to challenge secrecy in all closed cases.

A successful appeal will open this case to the public and help everyone better understand patent disputes that are filed in the Eastern District of Texas. EFF looks forward to vindicating the public’s right to access records on appeal.

Court transparency advocates file briefs supporting EFF’s appeal

The district court’s ruling raised concerns among the broader transparency community, as multiple organizations filed friend-of-court briefs in support of EFF’s appeal.

The Reporters Committee for Freedom of the Press and 19 media organizations, including the New York Times and ProPublica, filed a brief arguing that the district court’s decision to reject EFF’s intervention could jeopardize access to court records in long-closed cases, which have previously yielded disclosures showing Purdue Pharma’s efforts to boost sales of OxyContin and mislead physicians about the drug’s addiction risks. The brief details several other high-profile instances in which sealed court records led to criminal convictions or revealed efforts to cover up the sale of defective products.

“To protect just the sort of journalism described above, the overwhelming weight of authority holds that the press and public may intervene to unseal judicial records months, years, or even decades later—including, as here, where the parties might have hoped a case was over,” the brief argues. “The district court’s contrary ruling was error.”

A group of legal scholars from Stanford Law and the University of California, Berkeley, School of Law filed a brief arguing that the district court inappropriately allowed the parties to decide how to conceal many of the facts in this case via the protective order. The brief, relying on empirical research the scholars undertook to review millions of court dockets, argues that the district court’s secrecy here is part of a larger problem of lax oversight by judges, who too often defer to litigants’ desire to make as much secret as possible.

“Instead of upholding the public’s presumptive right of access to those materials, the court again deferred to the parties’ self-interested desire for secrecy,” the brief argues. “That abdication of judicial duty, both in entering the protective order and in sealing judicial records, not only reflects a stubborn refusal to abide by the rulings of the Fifth Circuit; it represents a stunning disregard for the public’s interest in maintaining an open and transparent court system.”

A third brief filed by Public Citizen and Public Justice underscored the importance of allowing the public to push for greater transparency in sealed court cases. Both organizations actively intervene in court cases to unseal records as part of their broader advocacy to protect the public. Their brief argues that allowing EFF to intervene in the case furthers the public’s longstanding ability to understand and oversee the judicial system. The brief argues:

The public’s right of access to courts is central to the American legal system. Widespread sealing of court records cuts against a storied history of presumptive openness to court proceedings rooted in common law and the First Amendment. It also inhibits transparency in the judicial process, limiting the public’s ability to engage with and trust courts’ decision making.

EFF is grateful for the support these organizations and individuals provided, and we look forward to vindicating the public’s rights of access in this case.

It’s Time For Lawmakers to Listen to Courts: Your Law Regulating Online Speech Will Harm Internet Users’ Free Speech Rights

Despite a long history of courts ruling that government efforts to regulate speech online harm all internet users and interfere with their First Amendment rights, state and federal lawmakers continue to pass laws that do just that. Three separate rulings issued in the past week show that the results of these latest efforts are as predictable as they are avoidable: courts will strike down these laws.

The question is, why aren’t lawmakers listening? Instead of focusing on passing consumer privacy legislation that attacks the harmful business practices of the most dominant online services, lawmakers are seeking to censor the internet or block young people from it. Instead of passing laws that increase competition and help usher in a new era of online services and interoperability, lawmakers are trying to force social media platforms to host specific viewpoints. 

Recent decisions by the Supreme Court and two federal district courts underscore how these laws, in addition to being unconstitutional, are also bad policy. Whatever the good intentions of lawmakers, laws that censor the internet directly harm people’s ability to speak online, access others’ speech, remain anonymous, and preserve their privacy.

The consistent rulings by these courts should send a clear message to lawmakers considering internet legislation: it’s time to focus on advancing legislation that solves some of the most pressing privacy and competition problems online without censoring users’ speech. Those proposals have the benefit of both being constitutional and addressing many of the harms people—both adults and young people—experience online. Let’s take a look at each of these cases.

Court Puts Mississippi Law on Hold, Highlights KOSA’s Unconstitutionality

A federal court on Monday blocked Mississippi from enforcing its children’s online safety law (House Bill 1126), ruling that it violates the First Amendment rights of adults and young people. The law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from being exposed to “harmful” material.

EFF filed a friend-of-the-court brief in the case that highlighted the many ways in which the law burdened adults’ ability to access lawful speech online, chilled anonymity online, and threatened their data privacy.

The district court agreed with EFF, ruling that “the Act requires all users (both adults and minors) to verify their ages before creating an account to access a broad range of protected speech on a broad range of covered websites. This burdens adults’ First Amendment rights.”

The court’s ruling also demonstrates the peril Congress faces should it advance the Kids Online Safety Act. Both chambers have drafted slightly different versions of KOSA, but the core provision of both bills would harm internet users—especially young people—by censoring a large swath of protected speech online.

EFF has previously explained in detail why KOSA will block everyone’s ability to access information online in ways that violate the First Amendment. The Mississippi court’s ruling earlier this week confirms that KOSA is unconstitutional, as the law contains similar problematic provisions.

Both Mississippi HB 1126 and KOSA include a provision that imposes liability on social media services that fail to “prevent and mitigate” exposing young people to several categories of content that the measures deem to be harmful. And both bills include a carveout that says a service will not face liability if a young person independently finds the material or searches for it.

The district court ruled that these “monitoring-and-censorship requirements” violated the First Amendment. First, the court found that the provision created restrictions on what content could be accessed online and thus triggered strict review under the First Amendment. Next, the court found that the provision fell far short of complying with the First Amendment because it doesn’t effectively prevent the harms to minors that Mississippi claims justify the law.

In short, if lawmakers believe they have a compelling interest in blocking certain content from minors online, the carveout provisions of KOSA and HB 1126 undercut their claims that such information is inherently harmful to minors. The First Amendment prevents lawmakers from engaging in such half-measures precisely because those proposals chill vast amounts of lawful speech while being inherently ineffective at addressing the harms that animated enacting them in the first place.

Another aspect of the court’s decision putting HB 1126 on hold should also serve as a warning to KOSA’s proponents. The Mississippi court ruled that the state law also ran afoul of the First Amendment because it treated online services differently based on the type of content they host.

HB 1126 broadly regulates social media services that allow users to create and post content and interact with others. But the law exempts other online services that “provide a user with access to news, sports, commerce, online video games or content primarily generated or selected by the digital service provider.”

The district court ruled that HB 1126’s exemption of certain online services based on the content they host subjected the law to the First Amendment’s strict requirements for laws that regulate the content of lawful speech.

“The facial distinction in H.B. 1126 based on the message the digital service provider conveys, or the more subtle content-based restriction based upon the speech’s function or purpose, makes the Act content-based, and therefore subject to strict scrutiny,” the court wrote.

KOSA contains a similar set of carveouts in its definitions. The bill would apply to online services that are likely to be used by minors but exempts news and sports websites and related services. KOSA will thus similarly be subjected to strict review under the First Amendment and, as EFF has said repeatedly, will likely fall far short of meeting the Constitution’s requirements.

Indiana Court Reaffirms That Age-Verification Schemes Violate the First Amendment

An Indiana federal court’s decision to block the state’s age-verification law highlights the consensus that such measures violate the First Amendment because they harm adults’ ability to access protected speech and burden their rights to anonymity and privacy. The decision casts significant doubt on similar bills being contemplated across the country, including in California.

The Indiana law requires any online service on which more than one-third of the hosted content is adult sexual material to verify the ages of its users and block young people from that material. The age-verification mandate requires services to obtain government-issued identification from users or to have users submit to invasive methods of verifying their age, such as providing personal information or undergoing facial recognition.

The court ruled that Indiana’s law was unconstitutional because it placed immense burdens on adults’ rights to access “a significant amount of speech protected by the First Amendment.” In particular, the law would require general-purpose websites that serve a variety of users and host a variety of content to implement age verification for all users if a third of the content featured sexual material.

As a result, users who visited that site but never accessed the sexual content would still have to verify their age. “Indeed, the Act imposes burdens on adults accessing constitutionally protected speech even when the majority of a website contains entirely acceptable, and constitutionally protected, material,” the court wrote.

Conversely, young people who have a First Amendment right to access the majority of non-sexual content on that site would not be able to.

The Indiana court’s decision is in keeping with more than two decades’ worth of rulings by the Supreme Court and lower courts that have found age-verification laws to be unconstitutional. What’s remarkable is that, despite this clearly developed law, states across the country continue to try to pass these laws.

Lawmakers should heed these courts’ consistent message and work on finding other ways to address harms to children online, such as by passing comprehensive data privacy laws, rather than continuing to pass laws that courts will strike down.

Supreme Court Confirms that Laws Targeting Content Moderation Will Face First Amendment Challenges, But Data Privacy and Competition Laws are Fair Game

The Supreme Court’s ruling this week in a pair of cases challenging states’ online content moderation laws should also serve as a wakeup call to lawmakers. If a state or Congress wants to pass a law that requires or coerces an online service to modify how it treats users’ speech, it will face an uphill battle to survive constitutional review.

Although EFF plans to publish an in-depth analysis of the decision soon, the court’s decision confirms what EFF has been saying for years: the First Amendment limits lawmakers’ ability to dictate what type of content online services host. And although platforms often make bad or inconsistent content moderation decisions, users are best served when private services—not the government—make those choices.

Importantly, the Supreme Court also confirmed something else EFF has long said: the First Amendment is not a barrier to lawmakers enacting measures that target dominant social media companies’ invasive privacy practices or their anti-competitive behavior.

Comprehensive consumer data privacy laws that protect all internet users are both much needed and can be passed consistent with the First Amendment.

The same is true for competition laws. Lawmakers can pass measures that foster greater competition for users and end the dominance of the current incumbents. Laws could also allow for the development and growth of a variety of third-party services that interoperate with major social media companies and provide options for users that the major companies do not.

The Supreme Court’s decision thus reinforces that lawmakers have many paths to addressing the harms occurring online, and that they can do so without violating the First Amendment. EFF hopes that lawmakers will seize this opportunity, and we remain ready to help them pass pro-competition and consumer data privacy laws.

Mississippi Can’t Wall Off Everyone’s Social Media Access to Protect Children

In what is becoming a recurring theme, Mississippi became the latest state to pass a law requiring social media services to verify users’ ages and block young people from lawful speech. Once again, EFF explained to the court why the law is unconstitutional.

Mississippi’s law (House Bill 1126) requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from being exposed to “harmful” material. NetChoice, the trade association that represents some of the largest social media services, filed suit and sought to block the law from going into effect in July.

EFF submitted a friend-of-the-court brief in support of NetChoice’s First Amendment challenge to the statute to explain how invasive and chilling online age verification mandates can be. “Such restrictions frustrate everyone’s ability to use one of the most expressive mediums of our time—the vast democratic forums of the internet that we all use to create art, share photos with loved ones, organize for political change, and speak,” the brief argues.

Online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their identification in physical spaces, EFF’s brief argues:

Unlike in-person age-gates, online age restrictions like Mississippi’s require all users to submit, not just momentarily display, data-rich government-issued identification or other proof-of-age, and in some commercially available methods, a photo.

The differences in online age verification create significant burdens on adults’ ability to access lawful speech online. Most troublingly, age verification requirements can completely block millions of U.S. adults who don’t have government-issued identification or lack IDs that would satisfy Mississippi’s verification requirements, such as by not having an up-to-date address or current legal name.

“Certain demographics are also disproportionately burdened when government-issued ID is used in age verification,” EFF’s brief argues. “Black Americans and Hispanic Americans are disproportionately less likely to have current and up-to-date driver’s licenses. And 30% of Black Americans do not have a driver’s license at all.”

Moreover, relying on financial and credit records to verify adults’ identities can also exclude large numbers of adults. As EFF’s brief recounts, some 20 percent of U.S. households do not have a credit card and 35 percent do not own a home.

The data collection required by age-verification systems can also deter people from using social media entirely, either because they want to remain anonymous online or are concerned about the privacy and security of any data they must turn over. HB 1126 thus burdens people’s First Amendment rights to anonymity and their right to privacy.

Regarding HB 1126’s threat to anonymity, EFF’s brief argued:

The threats to anonymity are real and multilayered. All online data is transmitted through a host of intermediaries. This means that when a website shares identifying information with its third-party age-verification vendor, that data is not only transmitted between the website and the vendor, but also between a series of third parties. Under the plain language of HB 1126, those intermediaries are not required to delete users’ identifying data and, unlike the digital service providers themselves, they are also not restricted from sharing, disclosing, or selling that sensitive data.

Regarding data privacy and security, EFF’s brief argued:

The personal data that HB 1126 requires platforms to collect or purchase is extremely sensitive and often immutable. By exposing this information to a vast web of websites and intermediaries, third-party trackers, and data brokers, HB 1126 poses the same concerns to privacy-concerned internet users as it does to the anonymity-minded users.

Finally, EFF’s brief argues that although HB 1126 contains laudable data privacy protections for children, they cannot be implemented without the state first demanding that every user verify their age so that services can apply those protections to children. As a result, the state cannot enforce those provisions.

EFF’s brief notes, however, that should Mississippi pass “comprehensive data privacy protections, not attached to content-based, speech-infringing, or privacy-undermining schemes,” that law would likely be constitutional.

EFF remains ready to support Mississippi’s effort to protect all its residents’ privacy. HB 1126, however, unfortunately seeks to provide only children with the privacy protections we all desperately need, while at the same time restricting adults’ and children’s access to lawful speech on social media.

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children's mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LGBTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can't in real life “for fear of torment.” They say these spaces have improved their mental health and given them a “haven” to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use.

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both its Chinese ownership and its interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

EFF Appeals Order Denying Public Access to Patent Filings

It’s bad enough when patent holders enforcing their rights in court try to exclude the public from those fights. What’s even worse is when courts endorse these secrecy tactics, as a federal court hearing an EFF unsealing motion did in May.

EFF continues to push for greater transparency in the case, Entropic Communications, LLC v. Charter Communications, Inc., and is asking a federal court of appeals to reverse the decision. A successful appeal will open this case to the public and help everyone better understand patent disputes that are filed in the U.S. District Court for the Eastern District of Texas.

Secrecy in patent litigation is an enduring problem, and EFF has repeatedly intervened in lawsuits involving patent claims to uphold the public’s right to access court records. And in this case, the secrecy issues are heightened because the parties and the court believe that they can jointly agree to keep entire records under seal, without ever having to justify the secrecy.

This case is a dispute between a semiconductor products provider, Entropic, and one of the nation's largest media companies, Charter, which offers cable television and internet service to millions of people. Entropic alleged that Charter infringed its patents (U.S. Patent Nos. 8,223,775; 8,284,690; 8,792,008; 9,210,362; 9,825,826; and 10,135,682), which cover cable modem technology.

Charter has argued it had a license defense to the patent claims based on the industry-leading cable data transmission standard, Data Over Cable Service Interface Specification (DOCSIS). The argument could raise a core legal question in patent law: when is a particular patent “essential” to a technical standard and thus encumbered by licensing commitments?  

But so many of the documents filed in court about this legal argument are heavily redacted, making it difficult to understand. EFF filed to intervene and unseal these documents in March. EFF’s motion in part targeted a practice that is occurring in many patent disputes in the Texas district court, whereby parties enter into agreements, known as protective orders. These agreements govern how parties will protect information they exchange during the fact-gathering portion of a case. 

Under the terms of the model protective order created by the court, the parties can file documents they agree are secret under seal without having to justify that such secrecy overrides the public’s right to access court records. 

Despite federal appellate courts repeatedly ruling that protective orders cannot short-circuit the public’s right of access, the district court ruled that the documents EFF sought to unseal could remain secret precisely because the parties had agreed. Additionally, the district court ruled that EFF had no right to seek to unseal the records because it filed the motion to intervene and make the records public four months after the parties had settled. 

EFF is disappointed by the decision and strongly disagrees. Notably, the opinion does not cite any legal authority that allows parties to stipulate to keep their public court fights secret. As noted above, many courts have ruled that such agreements are anathema to court transparency.

Moreover, the court’s ruling that EFF could not even seek to unseal the documents in the first place sets a dangerous precedent. As a result, many court dockets, including those with significant historic and newsworthy materials, can become permanently sealed merely because no member of the public tried to intervene and unseal records while the case was open.

That outcome turns the public’s right of access to court records on its head: it requires the public to be extremely vigilant about court secrecy and punishes them for not knowing about sealed records. Yet the entire point of the presumption of public access is that judges and litigants in the cases are supposed to protect the public’s right to open courts, as not every member of the public has the time and resources to closely monitor court proceedings and hire a lawyer to enforce their public rights should they be violated.

EFF looks forward to vindicating the public’s right to access records on appeal. 

Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech 

As Congress appears ready to gut one of the internet’s most important laws for protecting free speech, it is ignoring how that law protects and benefits millions of Americans’ ability to speak online every day.

The House Energy and Commerce Committee is holding a hearing on Wednesday on a bill that would end Section 230 (47 U.S.C. § 230) in 18 months. The authors of the bill argue that setting a deadline to either change or eliminate Section 230 will force the Big Tech online platforms to the bargaining table to create a new regime of intermediary liability. 

Take Action

Ending Section 230 Will Make Big Tech Monopolies Worse

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion of what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it needs to abandon this bill and spend time listening to the small services and everyday users who would be harmed should Section 230 be repealed.

Section 230 Protects Everyday Internet Users 

The bill introduced by House Energy & Commerce Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) is based on a series of mistaken assumptions and fundamental misunderstandings about Section 230. Mike Masnick at Techdirt has already explained many of the flawed premises and factual errors that the co-sponsors have made.

We won’t repeat the many errors that Masnick identifies. Instead, we want to focus on what we see as a glaring omission in the co-sponsors’ argument: how central Section 230 is to ensuring that every person can speak online.

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That's in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of other people.

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services. 

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. 

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

The co-sponsors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

Nothing could be further from the truth. If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and in cases of illegal behavior, work with law enforcement to hold those users responsible. 

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That law strongly discourages services from moderating or even knowing about user-generated content. This is because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services will have a huge incentive to just not moderate and not look for bad behavior. Taking the sponsors of the bill at their word, this would result in the exact opposite of their goal of protecting children and adults from harmful content online.  

Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog

This post was co-written by EFF legal intern Melda Gurakar.

Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled.

The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so.

Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. These lawsuits chill speech because they burden speakers who engaged in protected First Amendment activity with the financial costs and stress of having to fight litigation, rather than seeking to vindicate legitimate legal claims. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court saw X’s lawsuit was a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California’s anti-SLAPP statute.

The lawsuit, filed in July 2023, accused CCDH of unlawfully accessing and scraping data from X’s platform, which X argued CCDH used to harm the company’s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X argued that CCDH had initiated a calculated “scare campaign” aimed at deterring advertisers from engaging with the platform, supposedly resulting in significant financial loss for X. Moreover, X claimed that CCDH breached its Terms of Service as a user of X.

The court ruled that X’s accusations were insufficient to bypass the protective shield of California's anti-SLAPP statute. Furthermore, the court's decision to dismiss X Corp.'s claims, including those related to breach of contract and alleged infringements of the Computer Fraud and Abuse Act, stemmed from X Corp.'s inability to convincingly allege or demonstrate significant losses attributable to CCDH's activities. This outcome not only is a triumph for CCDH, but also validates the anti-SLAPP statute's role in safeguarding critical research efforts against baseless legal challenges. Thankfully, the court also rejected X’s claim under the federal Computer Fraud and Abuse Act (CFAA). X had argued that the CFAA barred CCDH’s scraping of public tweets—an erroneous reading of the law. The court found that regardless of that argument, X had not shown a “loss” of the type protected by the CFAA, such as technological harms to data or computers.

EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.'s lawsuit mischaracterized a nonviable defamation claim as a breach of contract to retaliate against CCDH. The brief supported CCDH's motion to dismiss, arguing that X's terms of service, as applied to CCDH's data scraping, should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information.

The ramifications of X Corp v. CCDH reach far beyond this decision. X Corp v. CCDH affirms the Center for Countering Digital Hate's freedom to conduct and publish research that critiques X Corp., and sets precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions.

EFF Seeks Greater Public Access to Patent Lawsuit Filed in Texas

You’re not supposed to be able to litigate in secret in the U.S. That’s especially true in a patent case dealing with technology that most internet users rely on every day.

Unfortunately, that’s exactly what’s happening in a case called Entropic Communications, LLC v. Charter Communications, Inc. The parties have made so much of their dispute secret that it is hard to tell how the patents owned by Entropic might affect the Data Over Cable Service Interface Specifications (DOCSIS) standard, a key technical standard that ensures cable customers can access the internet.

In Entropic, both sides are experienced litigants who should know that this type of sealing is improper. Unfortunately, overbroad secrecy is common in patent litigation, particularly in cases filed in the U.S. District Court for the Eastern District of Texas.

EFF has sought to ensure public access to lawsuits in this district for years. In 2016, EFF intervened in another patent case in this very district, arguing that the heavy sealing by a patent owner called Blue Spike violated the public’s First Amendment and common law rights. A judge ordered the case unsealed.

As Entropic shows, however, parties still believe they can shut down the public’s access to presumptively public legal disputes. This secrecy has to stop. That’s why EFF, represented by the Science, Health & Information Clinic at Columbia Law School, filed a motion today seeking to intervene in the case and unseal a variety of legal briefs and evidence submitted in the case. EFF’s motion argues that the legal issues in the case and their potential implications for the DOCSIS standard are a matter of public concern and asks the district court judge hearing the case to provide greater public access.

Protective Orders Cannot Override The Public’s First Amendment Rights

As EFF’s motion describes, the parties appear to have agreed to keep much of their filings secret via what is known as a protective order. These court orders are common in litigation and prevent the parties from disclosing information that they obtain from one another during the fact-gathering phase of a case. Importantly, protective orders set the rules for information exchanged between the parties, not what is filed on a public court docket.

The parties in Entropic, however, are claiming that the protective order permits them to keep secret both legal arguments made in briefs filed with the court as well as evidence submitted with those filings. EFF’s motion argues that this contention is incorrect as a matter of law because the parties cannot use their agreement to abrogate the public’s First Amendment and common law rights to access court records. More generally, relying on protective orders to limit public access is problematic because parties in litigation often have little interest or incentive to make their filings public.

Unfortunately, parties in patent litigation too often seek to seal a variety of information that should be public. EFF continues to push back on these claims. In addition to our work in Texas, we have also intervened in a California patent case, where we also won an important transparency ruling. The court in that case prevented Uniloc, a company that had filed hundreds of patent lawsuits, from keeping the public in the dark as to its licensing activities.

That is why part of EFF’s motion asks the court to clarify that parties litigating in the Texas district court cannot rely on a protective order for secrecy and that they must instead seek permission from the court and justify any claim that material should be filed under seal.

On top of clarifying that the parties’ protective orders cannot frustrate the public’s right to access federal court records, we hope the motion in Entropic helps shed light on the claims and defenses at issue in this case, which are themselves a matter of public concern. The DOCSIS standard is used in virtually all cable internet modems around the world, so the claims made by Entropic may have broader consequences for anyone who connects to the internet via a cable modem.

It’s also impossible to tell if Entropic might want to sue more cable modem makers. So far, Entropic has sued five big cable modem vendors—Charter, Cox, Comcast, DISH TV, and DirecTV—in more than a dozen separate cases. EFF is hopeful that the records will shed light on how broadly Entropic believes its patents can reach cable modem technology.

EFF is extremely grateful that Columbia Law School’s Science, Health & Information Clinic could represent us in this case. We especially thank the student attorneys who worked on the filing, including Sean Hong, Gloria Yi, Hiba Ismail, and Stephanie Lim, and the clinic’s director, Christopher Morten.

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media

EFF legal intern Jack Beck contributed to this post.

A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., the maker of Snapchat, the popular social media app, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users.

At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke.

The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals.

Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent.

In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content.

We made two arguments in our brief supporting Snap’s appeal.

First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, dissidents and journalists operating in repressive regimes, or domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability for users to exert control over who can see their messages and for how long advances internet users’ privacy and security under legitimate circumstances.

Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down.

There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows it to be held liable—as Snap knows from a lawsuit that put an end to its speed filter.

If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

Analyzing KOSA’s Constitutional Problems In Depth 

Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.  

Below, we walk through some of the most common criticisms we’ve gotten—and those criticisms the bill has received—to help explain our view of its likely impacts.  

KOSA’s Effectiveness  

First, and most importantly: We have serious disagreements with KOSA’s advocates on whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe, and have advocated for, comprehensive privacy protections as a better way to begin to address harms done to young people (and adults) who have been targeted by platforms’ predatory business practices.

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think that the language added to KOSA to address that censorship concern solves the problem, nor does focusing KOSA’s regulation on design elements of online services address the bill’s First Amendment problems.

Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that this law was not only ineffective at reducing sex trafficking online, but also created additional dangers for those same victims as well as others.

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a book seller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a book seller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.  

We think that online services will react to KOSA’s new liability in much the same way as the bookstore in Smith and the book distributor in Bantam Books: They will limit minors’ access to, or simply remove, any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their sites to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.)

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As noted above, the broad censorship that will result will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. The fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative, too.

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources.”

We understand that some interpret this language as a safeguard for online services that limits their liability if a minor happens across information on topics that KOSA identifies, and consequently, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or constitutional sense. As a practical matter, it’s not clear how an online service will be able to rely on the rule of construction’s safeguards given the diverse range of content it likely hosts.

Take for example an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views by users, some of which might describe addiction, drug use, and treatment, including negative and positive views on those points. KOSA’s rule of construction might protect the forum from a minor’s initial search for content that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA proscribes, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search for the forum precludes any liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search. 

Further, the rule of construction’s protection for the forum—that it may provide resources for preventing or mitigating drug and alcohol abuse based on evidence-based information and clinical resources—is unlikely to be helpful. That provision assumes that the forum has the resources to review all existing content on the forum and effectively screen all future content to permit only user-generated content concerning mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes that there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not).

Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much like the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit the ability for minors to access the content, a far easier and safer alternative than making case-by-case expert decisions regarding every piece of content on the forum.

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore still would have had to exhaustively review every book it sold and separate the obscene books from the scientific. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.”

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government overcome its burden on strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction here creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral or even mildly positive of those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.  

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service or platforms’ design features, the bill raises no First Amendment issues. We disagree.  

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content. 

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith regulated the way in which bookstores were designed, and imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.

KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality to comply with KOSA.  

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents certain content to minors that the bill deems harmful to them. To comply with that new liability, those platforms and services’ options are limited. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law.

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing the minors are minors.  

We also don’t think KOSA lets platforms claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset.

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents this harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, such as the ability to scroll through content, notifications of other content or messages, and autoplaying content.

But an attorney general’s power to enforce KOSA’s design-safeguard requirements could be used as a proxy for targeting services that host content certain officials dislike. The attorney general would simply target the same content or service it disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: States are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information.

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that provide only direct or group messages, such as Signal, or to schools, libraries, nonprofits, or ISPs like Comcast generally. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.

But a wide variety of niche online services that are for-profit would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear. 

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Their update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this kind of content filtering, done as a result of KOSA, would be harmful and would violate the First Amendment.

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They very well may decide blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification tools and content moderation tools.  

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this fear is warranted, but there is some nuance to it. No one should have to hand over their driver’s license—or, worse, provide biometric information—just to access lawful speech on websites. But there’s nothing in KOSA that would require online platforms to publicly tie your real name to your username.

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn’t technically require age verification, but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards. It creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end, we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue.

The Foilies 2024

Recognizing the worst in government transparency.

The Foilies are co-written by EFF and MuckRock and published in alternative newspapers around the country through a partnership with the Association of Alternative Newsmedia

We're taught in school about checks and balances between the various branches of government, but those lessons tend to leave out the role that civilians play in holding officials accountable. We're not just talking about the ballot box, but the everyday power we all have to demand government agencies make their records and data available to public scrutiny.

At every level of government in the United States (and often in other countries), there are laws that empower the public to file requests for public records. They go by various names—Freedom of Information, Right-to-Know, Open Records, or even Sunshine laws—but all share the general concept that because the government is of the people, its documents belong to the people. You don't need to be a lawyer or journalist to file these; you just have to care.

It's easy to feel powerless in these times, as local newsrooms close, and elected officials embrace disinformation as a standard political tool. But here's what you can do, and we promise it'll make you feel better: Pick a local agency—it could be a city council, a sheriff's office or state department of natural resources—and send them an email demanding their public record-request log, or any other record showing what requests they receive, how long it took them to respond, whether they turned over records, and how much they charged the requester for copies. Many agencies even have an online portal that makes it easier, or you can use MuckRock’s records request tool. (You can also explore other people's results that have been published on MuckRock's FOIA Log Explorer.) That will send the message to local leaders that they're on notice. You may even uncover an egregious pattern of ignoring or willfully violating the law.

The Foilies are our attempt to call out these violations each year during Sunshine Week, an annual event (March 10-16 this year) when advocacy groups, news organizations and citizen watchdogs combine efforts to highlight the importance of government transparency laws. The Electronic Frontier Foundation and MuckRock, in partnership with the Association of Alternative Newsmedia, compile the year's worst and most ridiculous responses to public records requests and other attempts to thwart public access to information, including through increasing attempts to gut the laws guaranteeing this access—and we issue these agencies and officials tongue-in-cheek "awards" for their failures.

Sometimes, these awards actually make a difference. Last year, Mendocino County in California repealed its policy of charging illegal public records fees after local journalists and activists used The Foilies’ "The Transparency Tax Award" in their advocacy against the rule.

This year marks our 10th annual accounting of ridiculous redactions, outrageous copying fees, and retaliatory attacks on requesters—and we have some doozies for the ages.

The "Winners"

The Not-So-Magic Word Award: Augusta County Sheriff’s Office, Va.

Public records laws exist in no small part because corruption, inefficiency and other malfeasance happen, regardless of the size of the government. The public’s right to hold these entities accountable through transparency can prevent waste and fraud.

Of course, this kind of oversight can be very inconvenient to those who would like a bit of secrecy. Employees in Virginia’s Augusta County thought they’d found a neat trick for foiling Virginia's Freedom of Information Act.

Consider: “NO FOIA”

In an attempt to shield a batch of emails from the public eye, employees in Augusta County began tagging their messages with “NO FOIA,” an apparent incantation staff believed could ward off transparency. Of course, there are no magical words that allow officials to evade transparency laws; the laws assume all government records are public, so agencies can’t just say they don’t want records released.

Fortunately, at least one county employee thought that breaking the law must be a little more complicated than that, and this person went to Breaking Through News to blow the whistle.

Breaking Through News sent a FOIA request for those “NO FOIA” emails. The outlet received just 140 emails of the 1,212 that the county indicated were responsive, and those released records highlighted the county’s highly suspect approach to withholding public records. Among the released records were materials like the wages for the Sheriff’s Office employees (clearly a public record), the overtime rates (clearly a public record) and a letter from the sheriff deriding the competitive wages being offered at other county departments (embarrassing but still clearly a public record).

Other clearly public records, according to a local court, included recordings of executive sessions that the commissioners had entered illegally, which Breaking Through News learned about through the released records. They teamed up with the Augusta Free Press to sue for access to the recordings, a suit they won last month. They still haven’t received the awarded records, and it’s possible that Augusta County will appeal. Still, it turned out that, thanks to the efforts of local journalists, the county’s misguided attempt to conjure a culture of “No FOIA” in Augusta County actually brought it more scrutiny and accountability.

The Poop and Pasta Award: Richlands, Va.
Spaghetti noodles spilling out of a mailbox.

Government officials retaliated against a public records requester by filling her mailbox with noodles.

In 2020, Laura Mollo of Richlands, Va., discovered that the county 911 center could not dispatch Richlands residents’ emergency calls: While the center dispatched all other county 911 calls, calls from Richlands had to be transferred to the Richlands Police Department to be handled. After the Richlands Town Council dismissed Mollo’s concerns, she began requesting records under the Virginia Freedom of Information Act. The records showed that Richlands residents faced lengthy delays in connecting with local emergency services. On one call, a woman pleaded for help for her husband, only to be told that county dispatch couldn’t do anything—and her husband died during the delay. Other records Mollo obtained showed that Richlands appeared to be misusing its resources.

You would hope that public officials would be grateful that Mollo uncovered the town’s inadequate emergency response system and budget mismanagement. Well, not exactly: Mollo endured a campaign of intimidation and harassment for holding the government accountable. Mollo describes how her mailbox was stuffed with cow manure on one occasion, and spaghetti on another (which Mollo understood to be an insult to her husband’s Italian heritage). A town contractor harassed her at her home; police pulled her over; and Richlands officials even had a special prosecutor investigate her.

But this story has a happy ending: In November 2022, Mollo was elected to the Richlands Town Council. The records she uncovered led Richlands to change over to the county 911 center, which now dispatches Richlands residents’ calls. And in 2023, the Virginia Coalition for Open Government recognized Mollo by awarding her the Laurence E. Richardson Citizen Award for Open Government. Mollo’s recognition is well-deserved. Our communities are indebted to people like her who vindicate our right to public records, especially when they face such inexcusable harassment for their efforts.

The Error 404 Transparency Not Found Award: FOIAonline

In 2012, FOIAonline was launched with much fanfare as a way to bring federal transparency into the 21st century. No longer would requesters have to mail or fax requests. Instead, FOIAonline was a consolidated starting point, managed by the Environmental Protection Agency (EPA), that let you file Freedom of Information Act requests with numerous federal entities from within a single digital interface.

Even better, the results of requests would be available online, meaning that if someone else asked for interesting information, it would be available to everyone, potentially reducing the number of duplicate requests. It was a good idea—but it was marred from the beginning by uneven uptake, agency infighting, and inscrutable design decisions that created endless headaches. In its latter years, FOIAonline would go down for days or weeks at a time without explanation. The portal saw agency after agency ditch the platform in favor of either homegrown solutions or third-party vendors.

Last year, the EPA announced that the grand experiment was being shuttered, leaving thousands of requesters uncertain about how and where to follow up on their open requests, and unceremoniously deleting millions of documents from public access without any indication of whether they would be made available again.

In a very on-brand twist of the knife, the decision to sunset FOIAonline was actually made two years prior, after an EPA office reported in a presentation that the service was likely to enter a “financial death spiral” of rising costs and reduced agency usage. Meanwhile, civil-society organizations such as MuckRock, the Project on Government Oversight, and the Internet Archive have worked to resuscitate and make available at least some of the documents the site used to host.

The Literary Judicial Thrashing of the Year Award: Pennridge, Penn., School District

Sometimes when you're caught breaking the law, the judge will throw the book at you. In the case of the Pennridge School District in Bucks County, Penn., Judge Jordan B. Yeager catapulted an entire shelf of banned books at administrators for violating the state's Right-to-Know Law.

The case begins with Darren Laustsen, a local parent who was alarmed by a new policy to restrict access to books that deal with “sexualized content,” seemingly in lockstep with book-censorship laws happening around the country. Searching the school library's catalog, he came across a strange trend: Certain controversial books that appeared on other challenged-book lists had been checked out for a year or more. Since students are only allowed to check out books for a week, he (correctly) suspected that library staff were checking them out themselves to block access.

So he filed a public records request for all books checked out by non-students. Now, it's generally important for library patrons to have their privacy protected when it comes to the books they read—but it's a different story if public employees are checking out books as part of their official duties and effectively enabling censorship. The district withheld the records, provided incomplete information, and even went so far as to return books and re-check them out under a student's account in order to obscure the truth. And so Laustsen sued.

The judge issued a scathing and literarily robust ruling: “In short, the district altered the records that were the subject of the request, thwarted public access to public information, and effectuated a cover-up of faculty, administrators, and other non-students’ removal of books from Pennridge High School’s library shelves." The opinion was peppered with witty quotes from historically banned books, including Nineteen Eighty-Four, Alice in Wonderland, The Art of Racing in the Rain and To Kill a Mockingbird. After enumerating the district's claims that later proved to be inaccurate, he cited Kurt Vonnegut's famous refrain from Slaughterhouse-Five: "So it goes."

The Photographic Recall Award: Los Angeles Police Department

Police agencies seem to love nothing more than trumpeting an arrest with an accompanying mugshot—but when the tables are turned, and it’s the cops’ headshots being disclosed, they seem to lose their minds and all sense of the First Amendment.

This unconstitutional escapade began (and is still going) after a reporter and police watchdog published headshots of Los Angeles Police Department officers, which they lawfully obtained via a public records lawsuit. LAPD cops and their union were furious. The city then sued the reporter, Ben Camacho, and the Stop LAPD Spying Coalition, demanding that they remove the headshots from the internet and return the records to LAPD.

You read that right: After a settlement in a public records lawsuit required the city to disclose the headshots, officials turned around and sued the requester for, uh, disclosing those same records, because the city claimed it accidentally released pictures of undercover cops.

But it gets worse: Last fall, a trial court denied a motion to throw out the city’s case seeking to claw back the images; Camacho and the coalition have appealed that decision and have not taken the images offline. And in February, the LAPD sought to hold Camacho and the coalition liable for damages it may face in a separate lawsuit brought against it by hundreds of police officers whose headshots were disclosed.

We’re short on space, but we’ll try to explain the myriad ways in which all of the above is flagrantly unconstitutional: The First Amendment protects Camacho and the coalition’s ability to publish public records they lawfully obtained, prohibits courts from entering prior restraints that stop protected speech, and limits the LAPD’s ability to make them pay for any mistakes the city made in disclosing the headshots. Los Angeles officials should be ashamed of themselves—but their conduct shows that they apparently have no shame.

The Cops Anonymous Award: Chesterfield County Police Department, Va.

The Chesterfield County Police Department in Virginia refused to disclose the names of hundreds of police officers to a public records requester on this theory: Because the cops might at some point go undercover, the public could never learn their identities. It’s not at all dystopian to claim that a public law enforcement agency needs to have secret police!

Other police agencies throughout the state seem to deploy similar secrecy tactics, too.

The Keep Your Opinions to Yourself Award: Indiana Attorney General Todd Rokita

In March 2023, Indiana Attorney General Todd Rokita sent a letter to medical providers across the state demanding information about the types of gender-affirming care they may provide to young Hoosiers. But this was no unbiased probe: Rokita made his position very clear when he publicly blasted these health services as “the sterilization of vulnerable children” that “could legitimately be considered child abuse.” He made claims to the media that the clinics’ main goals weren’t to support vulnerable youth, but to rake in cash.

Yet as loud as he was about his views in the press, Rokita was suddenly tight-lipped once the nonprofit organization American Oversight filed a public records request asking for all the research, analyses and other documentation that he used to support his claims. Although his agency located 85 documents that were relevant to their request, Rokita refused to release a single page, citing a legal exception that allows him to withhold deliberative documents that are “expressions of opinion or are of a speculative nature.”

If Rokita’s opinions on gender-affirming care weren't based on facts, perhaps he should've kept those opinions and speculations to himself in the first place.

The Failed Sunshine State Award: Florida Gov. Ron DeSantis

Florida’s Sunshine Law is known as one of the strongest in the nation, but Gov. Ron DeSantis spent much of 2023 working, pretty successfully, to undermine its superlative status with a slew of bills designed to weaken public transparency and journalism.

In March, DeSantis was happy to sign a bill to withhold all records related to travel done by the governor and a whole cast of characters. The law went into effect just over a week before the governor announced his presidential bid. In addition, DeSantis has asserted his “executive privilege” to block the release of public records in a move that, according to experts like media law professor Catherine Cameron, is unprecedented in Florida’s history of transparency.

DeSantis suspended his presidential campaign in January. That may affect how many trips he’ll be taking out-of-state in the coming months, but it won’t undo the damage of his Sunshine-slashing policies.

Multiple active lawsuits are challenging DeSantis over his handling of Sunshine Law requests. In one, The Washington Post is challenging the constitutionality of withholding the governor’s travel records. In that case, a Florida Department of Law Enforcement official last month claimed the governor had delayed the release of his travel records. Nonprofit watchdog group American Oversight filed a lawsuit in February, challenging “the unjustified and unlawful delay” in responding to requests, citing a dozen records requests to the governor’s office that have been pending for one to three years.

“It’s stunning, the amount of material that has been taken off the table from a state that many have considered to be the most transparent,” Michael Barfield, director of public access for the Florida Center for Government Accountability (FCGA), told NBC News. The FCGA is now suing the governor’s office for records on flights of migrants to Massachusetts. “We’ve quickly become one of the least transparent in the space of four years.”

The Self-Serving Special Session Award: Arkansas Gov. Sarah Huckabee Sanders

By design, FOIA laws exist to help the people who pay taxes hold the people who spend those taxes accountable. In Arkansas, as in many states, taxpayer money funds most government functions: daily office operations, schools, travel, dinners, security, etc. As Arkansas’ governor, Sarah Huckabee Sanders has flown all over the country, accompanied by members of her family and the Arkansas State Police. For the ASP alone, the people of Arkansas paid $1.4 million in the last half of last year.

Last year, Sanders seemed to tire of the scrutiny being paid to her office and her spending. Citing her family’s safety, Sanders tried to shut down any attempts to see her travel records, taking the unusual step of calling a special session of the state Legislature to protect herself from the menace of transparency.

Notably, the governor had also recently been implicated in an Arkansas Freedom of Information Act case for these kinds of records.

The attempt to gut the law included a laundry list of carve-outs unrelated to safety, such as walking back the ability of public-records plaintiffs to recover attorney's fees when they win their case. Other attempts to scale back Arkansas' FOIA earlier in the year had not passed, and the state attorney general’s office was already working to study what improvements could be made to the law.  

Fortunately, the people of Arkansas came out to support the principle of government transparency, even as their governor decided she shouldn’t need to deal with it anymore. Over a tense few days, dozens of Arkansans lined up to testify in defense of the state FOIA and the value of holding elected officials, like Sanders, accountable to the people.

By the time the session wound down, the state Legislature had gone through multiple revisions. The sponsors walked back most of the extreme asks and added a requirement for the Arkansas State Police to provide quarterly reports on some of the governor’s travel costs. However, other details of that travel, like companions and the size of the security team, ultimately became exempt. Sanders managed to twist the whole fiasco into a win, though it would be a great surprise if the Legislature didn’t reconvene this year with some fresh attempts to take a bite out of FOIA.

While such a blatant attempt to bash public transparency is certainly a loser move, it clearly earns Sanders a win in The Foilies—and the distinction of being one of the least transparent government officials this year.

The Doobie-ous Redaction Award: U.S. Department of Health and Human Services and Drug Enforcement Administration
A cannabis leaf covered with black bar redactions.

The feds heavily redacted an email about reclassifying cannabis from a Schedule I to a Schedule III substance.

Bloomberg reporters got a major scoop when they wrote about a Health and Human Services memo detailing how health officials were considering major changes to the federal restrictions on marijuana, recommending reclassifying it from a Schedule I substance to Schedule III.

Currently, the Schedule I classification for marijuana puts it in the same league as heroin and LSD, while Schedule III classification would indicate lower potential for harm and addiction along with valid medical applications.

Since Bloomberg viewed but didn’t publish the memo itself, reporters from the Cannabis Business Times filed a FOIA request to get the document into the public record. Their request was met with limited success: HHS provided a copy of the letter, but redacted virtually the entire document besides the salutation and contact information. When pressed further by CBT reporters, the DEA and HHS would only confirm what the redacted documents had already revealed—virtually nothing.

HHS handed over the full, 250-page review several months later, after a lawsuit was filed by an attorney in Texas. The crucial information the agencies had fought so hard to protect: “Based on my review of the evidence and the FDA’s recommendation, it is my recommendation as the Assistant Secretary for Health that marijuana should be placed in Schedule III of the CSA.”

The “Clearly Releasable,” Clearly Nonsense Award: U.S. Air Force

Increasingly, federal and state government agencies require public records requesters to submit their requests through online portals. It’s not uncommon for these portals to be quite lacking. For example, some portals fail to provide space to include information crucial to requests.

But the Air Force deserves special recognition for the changes it made to its submission portal, which asked requesters if they would agree to limit their requests to information that the Air Force deemed "clearly releasable.” You might think, “surely the Air Force defined this vague ‘clearly releasable’ information.” Alas, you’d be wrong: The form stated only that requesters would “agree to accept any information that will be withheld in compliance with the principles of FOIA exemptions as a full release.” In other words, the Air Force asked requesters to give up the fight over information before it even began, and to accept the Air Force's redactions and rejections as non-negotiable.

Following criticism, the Air Force jettisoned the changes to its portal. Moving forward, it's "clear" that it should aim higher when it comes to transparency.

The Scrubbed Scrubs Award: Ontario Ministry of Health, Canada

Upon taking office in 2018, Ontario Premier Doug Ford was determined to shake up the Canadian province’s healthcare system. His administration has been a bit more tight-lipped, however, about the results of that invasive procedure. Under Ford, Ontario’s Ministry of Health is fighting the release of information on how understaffed the province’s medical system is, citing “economic and other interests.” The government’s own report, partially released to Global News, details high attrition as well as “chronic shortages” of nurses.

The reporters’ attempts to find out exactly how understaffed the system is, however, were met with black-bar redactions. The government claims that releasing the information would negatively impact “negotiating contracts with health-care workers.” However, the refusal to release the information hasn’t helped solve the problem; instead, it’s left the public in the dark about the extent of the issue and what it would actually cost to address it.

Global News has appealed the withholdings. That process has dragged on for over a year, but a decision is expected soon.

The Judicial Blindfold Award: Mississippi Justice Courts

Courts are usually transparent by default. People can walk in to watch hearings and trials, and can get access to court records online or at the court clerk’s office. And there are often court rules or state laws that ensure courts are public.

Apparently, the majority of Mississippi Justice Courts don’t feel like following those rules. An investigation by ProPublica and the Northeast Mississippi Daily Journal found that nearly two-thirds of these county-level courts obstructed public access to basic information about law enforcement’s execution of search warrants. This blockade not only appeared to violate state rules on court access; it frustrated the public’s ability to scrutinize when police officers raid someone’s home without knocking and announcing themselves.

The good news is that the Daily Journal is pushing back. It filed suit in the justice court in Union County, Miss., and asked for an end to the practice of never making search-warrant materials public.

Mississippi courts are unfortunately not alone in their efforts to keep search warrant records secret. The San Bernardino Superior Court of California sought to keep secret search warrants used to engage in invasive digital surveillance, only disclosing most of them after the EFF sued.

It’s My Party and I Can Hide Records If I Want to Award: Wyoming Department of Education

Does the public really have a right to know if their tax dollars pay for a private political event?

Former Superintendent of Public Instruction Brian Schroeder and Chief Communications Officer Linda Finnerty in the Wyoming Department of Education didn’t seem to think so, according to Laramie County Judge Steven Sharpe.

Sharpe, in his order requiring disclosure of the records, wrote that the two were more concerned with “covering the agency’s tracks” and acted in “bad faith” in complying with Wyoming’s state open records law.

The lawsuit proved that Schroeder originally used public money for a "Stop the Sexualization of Our Children" event and provided misleading statements to the plaintiffs about the source of funding for the private, pro-book-banning event.

The former superintendent had also failed to provide texts and emails sent via personal devices that were related to the planning of the event, ignoring the advice of the state’s attorneys. Instead, Schroeder decided to “shop around” for legal advice and listen to a friend, private attorney Drake Hill, who told him to not provide his cell phone for inspection.

Meanwhile, Finnerty and the Wyoming Department of Education “did not attempt to locate financial documents responsive to plaintiffs’ request, even though Finnerty knew or certainly should have known such records existed.”

Transparency won this round with the disclosure of more than 1,500 text messages and emails—and according to Sharpe, the incident established a legal precedent on Wyoming public records access.

The Fee-l the Burn Award: Baltimore Police Department

In 2020, Open Justice Baltimore sued the Baltimore Police Department over the agency's demand that the nonprofit watchdog group pay more than $1 million to obtain copies of use-of-force investigation files. 

The police department had decreased their assessment to $245,000 by the time of the lawsuit, but it rejected the nonprofit’s fee waiver, questioning the public interest in the records and whether they would change the public's understanding of the issue. The agency also claimed that fulfilling the request would be costly and burdensome for the short-staffed department.

In 2023, Maryland’s Supreme Court issued a sizzling decision criticizing the BPD’s $245,000 fee assessment and its refusal to waive that fee in the name of public interest. The Supreme Court found that the public interest in how the department polices itself was clear and that the department should have considered how a denial of the fee waiver would “exacerbate the public controversy” and further “the perception that BPD has something to hide.”

The Supreme Court called BPD’s fee assessment “arbitrary and capricious” and remanded the case back to the police department, which must now reconsider the fee waiver. The unanimous decision from the state’s highest court did not mince its words on the cost of public records, either: “While an official custodian’s discretion in these matters is broad,” the opinion reads, “it is not boundless.”

The Continuing Failure Award: United States Citizenship and Immigration Services

Alien registration files, also commonly known as “A-Files,” contain crucial information about a non-citizen’s interaction with immigration agencies, and are central to determining eligibility for immigration benefits.

However, U.S. immigration agencies have routinely failed to release alien files within the statutory time limit for responding, according to Nightingale et al v. U.S. Citizenship and Immigration Services et al, a class-action lawsuit by a group of immigration attorneys and individual requesters.

The attorneys filed suit in 2019 against the U.S. Citizenship and Immigration Services, the Department of Homeland Security and U.S. Immigration and Customs Enforcement. In 2020, Judge William H. Orrick ruled that the agencies must respond to FOIA requests within 20 business days, and provide the court and class counsel with quarterly compliance reports. The case remains open.

With U.S. immigration courts facing a backlog of more than 2 million cases as of October of last year, according to the U.S. Government Accountability Office, the path to citizenship is bogged down for many applicants. The failure of immigration agencies to comply with statutory deadlines for requests only makes navigating the immigration system even more challenging. There is reason for hope for applicants, however. In 2022, Attorney General Merrick Garland made it federal policy not to require FOIA requests for copies of immigration proceedings, instead encouraging agencies to make records more readily accessible through other means.

Even the A-File backlog itself is improving. In the last status report, the Department of Justice wrote that “of the approximately 119,140 new A-File requests received in the current reporting period, approximately 82,582 were completed, and approximately 81,980 were timely completed.”

The Creative Invoicing Award: Richmond, Va., Police Department
A redacted document with an expensive price tag attached.

Some agencies claim outrageous fees for redacting documents to deter public access.

OpenOversightVA requested copies of general procedures—the basic outline of how police departments run—from localities across Virginia. While many departments either publicly posted them or provided them at no charge, Richmond Police responded with a $7,873.14 invoice. That’s $52.14 an hour for one hour of “review, and, if necessary, redaction” of each of the department’s 151 procedures.

This Foilies “winner” was chosen because of the wide gap between how available the information should be, and the staggering cost to bring it out of the file cabinet.

As MuckRock’s agency tracking shows, this is hardly an aberration for the agency. But this estimated invoice came not long after the department’s tear-gassing of protesters in 2020 cost the city almost $700,000. At a time when other departments are opening their most basic rulebooks (in California, for example, every law enforcement agency is required to post these policy manuals online), Richmond has been caught attempting to use a simple FOIA request as a cash cow.

The Foilies (Creative Commons Attribution License) were compiled by the Electronic Frontier Foundation (Director of Investigations Dave Maass, Senior Staff Attorney Aaron Mackey, Legal Fellow Brendan Gilligan, Investigative Researcher Beryl Lipton) and MuckRock (Co-Founder Michael Morisy, Data Reporter Dillon Bergin, Engagement Journalist Kelly Kauffman, and Contributor Tom Nash), with further review and editing by Shawn Musgrave. Illustrations are by EFF Designer Hannah Diaz. The Foilies are published in partnership with the Association of Alternative Newsmedia. 

Don’t Fall for the Latest Changes to the Dangerous Kids Online Safety Act 

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version this week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal and important speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own or that of their friends, families, and neighbors—will not have the resources they need to do so. 
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from those previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.

KOSA is still a censorship bill, and it will still harm a large number of minors.

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their products. These include self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before. 

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform. 

Design features include but are not limited to: 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear how offering basic features that many users rely on, such as notifications, by itself creates harm. But that points to the fundamental problem with this provision: KOSA is essentially trying to use the features of a service as a proxy to create liability for online speech that the bill’s authors do not like. The list of supposedly harmful designs shows that the legislators backing KOSA want to regulate online content, not just design. 

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. It’s that these lawmakers do not like some lawful content that is accessible via an online service’s features. 

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  


Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors, then under this language it could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to stop an online newspaper from using an infinite scroll feature or auto-playing videos, that restriction would be struck down. KOSA’s latest variant is no different. 

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on a claim that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of the service. 

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission whose members are nominated by the President. This still allows a small group of federal officials to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online. 

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including those of internet users (in Reno, mentioned above) and the companies that host our speech online. But Patreon’s First Amendment argument is misguided. The company seeks to elevate its own speech interests over those of the internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But generally courts address the First Amendment by invalidating the case-specific application of those laws, rather than invalidating them entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon’s argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should reject this challenge and uphold the VPPA’s protections against the disclosure of everyone’s viewing history.

You can read our brief here.
