
The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review

As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple of years.

All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.

We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.

Public Officials Censoring Comments on Government Social Media Pages

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED

The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.

The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court's new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.

Government Mandates for Platforms to Carry Certain Online Speech

Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED  

The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.

In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

Government Coercion in Social Media Content Moderation

Case: Murthy v. Missouri – DECIDED

The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.

Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff ’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.

Some Takeaways From These Three Sets of Cases

The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.

Three key takeaways emerged from these three sets of cases (across five total cases):

First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.

Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see them, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.

Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Government-Mandated Age Verification

Case: Free Speech Coalition v. Paxton – PENDING

Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF Tells the Second Circuit a Second Time That Electronic Device Searches at the Border Require a Warrant

EFF, along with ACLU and the New York Civil Liberties Union, filed a second amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Smith, involved a traveler who was stopped at Newark airport after returning from a trip to Jamaica. He was detained by border officers at the behest of the FBI and his cell phone was forensically searched. He had been under investigation for his involvement in a conspiracy to control the New York area emergency mitigation services (“EMS”) industry, which included (among other things) insurance fraud and extortion. He was subsequently prosecuted and sought to have the evidence from his cell phone thrown out of court.

As we wrote about last year, the district court made history in holding that border searches of cell phones require a warrant and therefore warrantless device searches at the border violate the Fourth Amendment. However, the judge allowed the evidence to be used in Mr. Smith’s prosecution because, the judge concluded, the officers had a “good faith” belief that they were legally permitted to search his phone without a warrant.

The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.

The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here—and that the district court was correct in applying Riley. In that case, the Supreme Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet. As the Smith court stated, “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within the country.”

Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes, as was the case of the FBI investigating Mr. Smith’s involvement in the EMS conspiracy) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.

If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

In our brief, we also highlighted two other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Sultanov (2024) and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well. Earlier this month, we filed a brief in another Second Circuit border search case, U.S. v. Kamaldoss. We hope that the Second Circuit will rise to the occasion in one of these cases and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

EFF to Second Circuit: Electronic Device Searches at the Border Require a Warrant

EFF, along with ACLU and the New York Civil Liberties Union, filed an amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Kamaldoss, involves the criminal prosecution of a man whose cell phone and laptop were forensically searched after he landed at JFK airport in New York City. While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data to create a detailed report of the device owner’s activities and communications. In part based on evidence obtained during the forensic device searches, Mr. Kamaldoss was subsequently charged with prescription drug trafficking.

The district court upheld the forensic searches of his devices because the government had reasonable suspicion that the defendant “was engaged in efforts to illegally import scheduled drugs from abroad, an offense directly tied to at least one of the historic rationales for the border exception—the disruption of efforts to import contraband.”

The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.

The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data. Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.

Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.

If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

In our brief, we also highlighted three other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Smith (2023), which we wrote about last year; U.S. v. Sultanov (2024); and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well, in the hope that the Second Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations

UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.

EFF legal intern Nick Delehanty was the principal author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), Foundation for Individual Rights and Expression (FIRE), Public Knowledge, Reason Foundation, and Wikimedia Foundation.

At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.

Additionally, because the common law can hold publishers liable for other people’s content that they publish (for example, defamatory letters to the editor printed in a newspaper), and the First Amendment offers only limited protection against that liability, Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.

Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.

In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.

We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:

  • First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
  • Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
  • Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
  • Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.

We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.

The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.

EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0

EFF along with the ACLU of Northern California and the Center for Democracy & Technology filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed.

Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile—but automation makes this process a breeze. Unfollowing all friends, groups, and pages makes the News Feed blank, but this allows you to curate your News Feed by refollowing people and organizations you want regular updates on. Importantly, unfollowing isn’t the same thing as unfriending—unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles.
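The unfollow-then-recurate flow described above can be modeled in a few lines. This is an illustrative sketch only, not the extension's actual code: the real tool automates clicks in Facebook's interface rather than calling functions like these, and every name below is invented for the example.

```javascript
// Hypothetical model of "unfollow everything, then re-curate."
// Unfollowing empties the News Feed but preserves the underlying
// connections (friends, groups, pages), which is why it differs
// from unfriending.
function unfollowEverything(followed) {
  return { feed: [], connections: [...followed] };
}

// Re-curate by refollowing only the sources the user actively chooses;
// only those reappear in the feed, while all connections remain intact.
function refollow(state, chosen) {
  const feed = state.connections.filter((c) => chosen.includes(c));
  return { feed, connections: state.connections };
}

// Example: clear a feed drawn from three sources, then refollow one.
const before = ["friend-a", "group-b", "page-c"];
const cleared = unfollowEverything(before);
const curated = refollow(cleared, ["friend-a"]);
```

The key design point the sketch captures is that clearing the feed is non-destructive: the `connections` list survives both steps, so the user loses nothing by starting from a blank feed.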

As Louis Barclay, the developer of Unfollow Everything 1.0, explained:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Prof. Zuckerman fears being sued by Meta, Facebook’s parent company, because the company previously sent Louis Barclay a cease-and-desist letter. Prof. Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools.

In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case.

The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. 

But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use.

Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0.

Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.”

Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing the tools that block methods of unwanted corporate tracking such as advertising cookies, and block stalkerware deployed by malicious actors.

We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.

EFF to Ninth Circuit: Don’t Shield Foreign Spyware Company from Human Rights Accountability in U.S. Court

Legal intern Danya Hajjaji was the lead author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit supporting a group of journalists in their lawsuit against Israeli spyware company NSO Group. In our amicus brief backing the plaintiffs’ appeal, we argued that victims of human rights abuses enabled by powerful surveillance technologies must be able to seek redress through U.S. courts against both foreign and domestic corporations. 

NSO Group notoriously manufactures “Pegasus” spyware, which enables full remote control of a target’s smartphone. Pegasus attacks are stealthy and sophisticated: the spyware embeds itself into phones without an owner having to click anything (such as an email or text message). A Pegasus-infected phone allows government operatives to intercept personal data on a device as well as cloud-based data connected to the device.

Our brief highlights multiple examples of Pegasus spyware having been used by governmental bodies around the world to spy on targets such as journalists, human rights defenders, dissidents, and their families. For example, the Saudi Arabian government was found to have deployed Pegasus against Washington Post columnist Jamal Khashoggi, who was murdered at the Saudi consulate in Istanbul, Turkey.

In the present case, Dada v. NSO Group, the plaintiffs are affiliated with El Faro, a prominent independent news outlet based in El Salvador, and were targeted with Pegasus through their iPhones. The attacks on El Faro journalists coincided with their investigative reporting into the Salvadoran government.

The plaintiffs sued NSO Group in California because NSO Group, in deploying Pegasus against iPhones, abused the services of Apple, a California-based company. However, the district court dismissed the case on a forum non conveniens theory, holding that California is an inconvenient forum for NSO Group. The court thus concluded that exercising jurisdiction over the foreign corporation was inappropriate and that the case would be better considered by a court in Israel or elsewhere.

However, as we argued in our brief, NSO Group is already defending two other lawsuits in California brought by both Apple and WhatsApp. And the company is unlikely to face legal accountability in its home country—the Israeli Ministry of Defense provides an export license to NSO Group, and its technology has been used against citizens within Israel.

That's why this case is critical—victims of powerful, increasingly-common surveillance technologies like Pegasus spyware must not be barred from U.S. courts.

As we explained in our brief, the private spyware industry is lucrative, worth an estimated $12 billion, and is largely bankrolled by repressive governments. These parties widely fail to comport with the United Nations’ Guiding Principles on Business and Human Rights, which caution against creating a situation where victims of human rights abuses “face a denial of justice in a host State and cannot access home State courts regardless of the merits of the claim.”

The U.S. government has endorsed the Guiding Principles as applied to U.S. companies selling surveillance technologies to foreign governments, and it has also sought to address the issue of spyware facilitating state-sponsored human rights violations. In 2021, for example, the Biden Administration recognized NSO Group as engaging in such practices by placing it on a list of entities prohibited from receiving U.S. exports of hardware or software.

Unfortunately, the Guiding Principles expressly avoid creating any “new international law obligations,” thus leaving accountability to either domestic law or voluntary mechanisms.

Yet voluntary enforcement mechanisms are wholly inadequate for human rights accountability. The weakness of voluntary enforcement is best illustrated by NSO Group supposedly implementing its own human rights policies, all the while acting as a facilitator of human rights abuses.

Restraining the use of the forum non conveniens doctrine and opening courthouse doors to victims of human rights violations wrought by surveillance technologies would bind companies like NSO Group through judicial liability.

But this would not mean that U.S. courts have unfettered discretion over foreign corporations. The reach of courts is limited by rules of personal jurisdiction and plaintiffs must still prove the specific required elements of their legal claims.

The Ninth Circuit must give the El Faro plaintiffs the chance to vindicate their rights in federal court. Shielding spyware companies like NSO Group from legal accountability not only diminishes digital civil liberties like privacy and freedom of speech, it paves the way for the worst human rights abuses, including physical apprehensions, unlawful detentions, torture, and even summary executions by the governments that use the spyware.

Victory! D.C. Circuit Rules in Favor of Animal Rights Activists Censored on Government Social Media Pages

In a big win for free speech online, the U.S. Court of Appeals for the D.C. Circuit ruled that a federal agency violated the First Amendment when it blocked animal rights activists from commenting on the agency’s social media pages. We filed an amicus brief in the case, joined by the Foundation for Individual Rights and Expression (FIRE).

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH) in 2021, arguing that the agency unconstitutionally blocked their comments opposing animal testing in scientific research on the agency’s Facebook and Instagram pages. (NIH provides funding for research that involves testing on animals.)

NIH argued it was simply implementing reasonable content guidelines that included a prohibition against public comments that are “off topic” to the agency’s social media posts. Yet the agency implemented the “off topic” rule by employing keyword filters that included words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop to block PETA activists from posting comments that included these words.

NIH’s Social Media Pages Are Limited Public Forums

The D.C. Circuit first had to determine whether the comment sections of NIH’s social media pages are designated public forums or limited public forums. As the court explained, “comment threads of government social media pages are designated public forums when the pages are open for comment without restrictions and limited public forums when the government prospectively sets restrictions.”

The court concluded that the comment sections of NIH’s Facebook and Instagram pages are limited public forums: “because NIH attempted to remove a range of speech violating its policies … we find sufficient evidence that the government intended to limit the forum to only speech that meets its public guidelines.”

The nature of the government forum determines what First Amendment standard courts apply in evaluating the constitutionality of a speech restriction. Speech restrictions that define limited public forums must only be reasonable in light of the purposes of the forum, while speech restrictions in designated public forums must satisfy more demanding standards. In both forums, however, viewpoint discrimination is prohibited.

NIH’s Social Media Censorship Violated Animal Rights Activists’ First Amendment Rights

After holding that the comment sections of NIH’s Facebook and Instagram pages are limited public forums subject to a lower standard of reasonableness, the D.C. Circuit nevertheless held that NIH’s “off topic” rule as implemented by keyword filters is unreasonable and thus violates the First Amendment.

The court explained that because the purpose of the forums (the comment sections of NIH’s social media pages) is directly related to speech, “reasonableness in this context is thus necessarily a more demanding test than in forums that have a primary purpose that is less compatible with expressive activity, like the football stadium.”

In rightly holding that NIH’s censorship was unreasonable, the court adopted several of the arguments we made in our amicus brief, in which we assumed that NIH’s social media pages are limited public forums but argued that the agency’s implementation of its “off topic” rule was unreasonable and thus unconstitutional.

Keyword Filters Can’t Discern Context

We argued, for example, that keyword filters are an “unreasonable form of automated content moderation because they are imprecise and preclude the necessary consideration of context and nuance.”

Similarly, the D.C. Circuit stated, “NIH’s off-topic policy, as implemented by the keywords, is further unreasonable because it is inflexible and unresponsive to context … The permanent and context-insensitive nature of NIH’s speech restriction reinforces its unreasonableness.”

Keyword Filters Are Overinclusive

We also argued, related to context, that keyword filters are unreasonable “because they are blunt tools that are overinclusive, censoring more speech than the ‘off topic’ rule was intended to block … NIH’s keyword filters assume that words related to animal testing will never be used in an on-topic comment to a particular NIH post. But this is false. Animal testing is certainly relevant to NIH’s work.”
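The overinclusiveness problem is easy to demonstrate. Here is a minimal sketch of a naive keyword filter of the kind described above; the word list and sample comments are illustrative, not NIH’s actual implementation:

```python
# A naive keyword filter: it blocks any comment containing a listed
# word, with no awareness of topic or context.
BLOCKED_WORDS = {"cruelty", "torture", "hurt", "kill", "stop"}  # illustrative list

def is_blocked(comment: str) -> bool:
    # Lowercase, split on whitespace, and strip trailing punctuation
    # before checking each word against the block list.
    words = comment.lower().split()
    return any(w.strip(".,!?") in BLOCKED_WORDS for w in words)

# An on-topic comment about research is blocked merely because it
# happens to contain the word "stop":
print(is_blocked("Great results! When will the trial stop enrolling?"))  # True

# Meanwhile, a genuinely off-topic comment sails through:
print(is_blocked("Nice weather at the conference today"))  # False
```

Because the filter matches bare words rather than meaning, any comment using the blocked vocabulary is censored regardless of whether it is on topic, which is exactly the bluntness the brief describes.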

The court acknowledged this, stating, “To say that comments related to animal testing are categorically off-topic when a significant portion of NIH’s posts are about research conducted on animals defies common sense.”

NIH’s Keyword Filters Reflect Viewpoint Discrimination

We also argued that NIH’s implementation of its “off topic” rule through keyword filters was unreasonable because those filters reflected a clear intent to censor speech critical of the government, that is, speech reflecting a viewpoint that the government did not like.

The court recognized this, stating, “NIH’s off-topic restriction is further compromised by the fact that NIH chose to moderate its comment threads in a way that skews sharply against the appellants’ viewpoint that the agency should stop funding animal testing by filtering terms such as ‘torture’ and ‘cruel,’ not to mention terms previously included such as ‘PETA’ and ‘#stopanimaltesting.’”

On this point, we further argued that “courts should consider the actual vocabulary or terminology used … Certain terminology may be used by those on only one side of the debate … Those in favor of animal testing in scientific research, for example, do not typically use words like cruelty, revolting, tormenting, torture, hurt, kill, and stop.”

Additionally, we argued that “a highly regulated social media comments section that censors Plaintiffs’ comments against animal testing gives the false impression that no member of the public disagrees with the agency on this issue.”

The court acknowledged both points, stating, “The right to ‘praise or criticize governmental agents’ lies at the heart of the First Amendment’s protections … and censoring speech that contains words more likely to be used by animal rights advocates has the potential to distort public discourse over NIH’s work.”

We are pleased that the D.C. Circuit took many of our arguments to heart in upholding the First Amendment rights of social media users in this important internet free speech case.

EFF to Sixth Circuit: Government Officials Should Not Have Free Rein to Block Critics on Their Social Media Accounts When Used For Governmental Purposes

Legal intern Danya Hajjaji was the lead author of this post.

The Sixth Circuit must carefully apply a new “state action” test from the U.S. Supreme Court to ensure that public officials who use social media to speak for the government do not have free rein to infringe critics’ First Amendment rights, EFF and the Knight First Amendment Institute at Columbia University said in an amicus brief.

The Sixth Circuit is set to re-decide Lindke v. Freed, a case that was recently remanded from the Supreme Court. The lawsuit arose after Port Huron, Michigan resident Kevin Lindke left critical comments on City Manager James Freed's Facebook page. Freed retaliated by blocking Lindke from being able to view, much less continue to leave critical comments on, Freed’s public profile. The dispute turned on the nature of Freed’s Facebook account, where updates on his government engagements were interwoven with personal posts.

Public officials who use social media as an extension of their office engage in “state action,” which refers to acting on the government’s behalf. They are bound by the First Amendment and generally cannot engage in censorship, especially viewpoint discrimination, by deleting comments or blocking citizens who criticize them. While social media platforms are private corporate entities, government officials who operate interactive online forums to engage in public discussions and share information are bound by the First Amendment.

The Sixth Circuit initially ruled in Freed’s favor, holding that no state action exists due to the prevalence of personal posts on his Facebook page and the lack of government resources, such as staff members or taxpayer dollars, used to operate it.  

The case then went to the U.S. Supreme Court, where EFF and the Knight Institute filed a brief urging the Court to establish a functional test that finds state action when a government official uses a social media account in furtherance of their public duties, even if the account is also sometimes used for personal purposes.

The U.S. Supreme Court crafted a new two-pronged state action test: a government official’s social media activity is state action if 1) the official “possessed actual authority to speak” on the government’s behalf and 2) “purported to exercise that authority” when speaking on social media. As we wrote when the decision came out, this state action test does not go far enough in protecting internet users who interact with public officials online. Nevertheless, the Court has finally provided further guidance on this issue.

Now that the case is back in the Sixth Circuit, EFF and the Knight Institute filed a second brief endorsing a broad construction of the Supreme Court’s state action test.

The brief argues that the test’s “authority” prong requires no more than a showing, either through written law or unwritten custom, that the official had the authority to speak on behalf of the government generally, irrespective of the medium of communication—whether an in-person press conference or social media. It need not be the authority to post on social media in particular.

For high-ranking elected officials (such as presidents, governors, mayors, and legislators), courts should not have a problem finding that they have clear and broad authority to speak on government policies and activities. The same is true for heads of government agencies, who are also generally empowered to speak on matters broadly relevant to those agencies. For lower-ranking officials, courts should consider the areas of their expertise and whether the social media posts in question were related to subjects within, as the Supreme Court said, their “bailiwick.”

The brief also argues that the test’s “exercise” prong requires courts to engage in, in the words of the Supreme Court, a “fact-specific undertaking” to determine whether the official was speaking on social media in furtherance of their government duties.

This element is easily met where the social media account is owned, created, or operated by the office or agency itself, rather than the official—for example, the Federal Trade Commission’s @FTC account on X (formerly Twitter).

But when an account is owned by the person and is sometimes used for non-governmental purposes, courts must look to the content of the posts. These include those posts from which the plaintiff’s comments were deleted, or any posts the plaintiff would have wished to see or comment on had the official not blocked them entirely. Former President Donald Trump is a salient example, having routinely used his legacy @realDonaldTrump X account, rather than the government-created and operated account @POTUS, to speak in furtherance of his official duties while president.

However, it is often not easy to differentiate between personal and official speech by looking solely at the posts themselves. For example, a social media post could be private speech reflecting personal political passions, or speech in furtherance of an official’s duties, or both. In that case, courts must consider additional factors when assessing posts made to a mixed-use account. These factors include the account’s appearance, such as whether government logos were used; whether government resources, such as staff or taxpayer funds, were used to operate the account; and the presence of any clear disclaimers as to the purpose of the account.

EFF and the Knight Institute also encouraged the Sixth Circuit to consider the crucial role social media plays in facilitating public participation in the political process and accountability of government officials and institutions. If the Supreme Court’s test is construed too narrowly, public officials will further circumvent their constitutional obligations by blocking critics or removing any trace of disagreement from any social media accounts that are used to support and perform their official duties.

Social media has given rise to active democratic engagement, while government officials at every level have leveraged this to reach their communities, discuss policy issues, and make important government announcements. Excessively restricting any member of the public’s viewpoints threatens public discourse in spaces government officials have themselves opened as public political forums.

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints that individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements:

  • the official “possessed actual authority to speak” on the government’s behalf, and
  • the official “purported to exercise that authority when he spoke on social media.”

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to generally speak on behalf of the government would be easier to prove for plaintiffs and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This contrasts with what the Sixth Circuit had essentially held. The authority to speak on behalf of the government may instead be based on “persistent,” “permanent,” and “well settled” “custom or usage.”

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his rights to delete the comments, as the constituent could not prove that the issue was within that particular government official’s purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”  

The court further explains:

The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media

EFF legal intern Jack Beck contributed to this post.

A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., maker of the popular social media app Snapchat, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users.

At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke.

The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals.

Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent.

In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content.

We made two arguments in our brief supporting Snap’s appeal.

First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, dissidents and journalists operating in repressive regimes, or domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability for users to exert control over who can see their messages and for how long advances internet users’ privacy and security under legitimate circumstances.

Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down.

There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows for them to be liable—as Snap knows from a lawsuit that put an end to its speed filter.

If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling has severe implications for free speech and privacy not just for visa applicants, but also the people in their social media networks—millions, if not billions of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is even further heightened if government agencies use automated tools to conduct their review—which the State Department has not ruled out and the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

The U.S. Supreme Court’s Busy Year of Free Speech and Tech Cases: 2023 Year in Review

The U.S. Supreme Court has taken an unusually active interest in internet free speech issues. EFF participated as amicus in a whopping nine cases before the court this year. The court decided four of those cases, and decisions in the remaining five cases will be published in 2024.   

Of the four cases decided this year, the results are a mixed bag. The court showed restraint and respect for free speech rights when considering whether social media platforms should be liable for ISIS content, while also avoiding gutting one of the key laws supporting free speech online. The court also heightened protections for speech that may rise to the level of criminal “true threats.” But the court declined to overturn an overbroad law that relates to speech about immigration.  

Next year, we’re hopeful that the court will uphold the right of individuals to comment on government officials’ social media pages, when those pages are largely used for governmental purposes and even when the officials don’t like what those comments say; and that the court will strike down government overreach in mandating what content must stay up or come down online, or otherwise distorting social media editorial decisions. 

Platform Liability for Violent Extremist Content 

Cases: Gonzalez v. Google and Twitter v. Taamneh – DECIDED 

The court, in two similar cases, declined to hold social media companies—YouTube and Twitter—responsible for aiding and abetting terrorist violence allegedly caused by user-generated content posted to the platforms. The case against YouTube (Google) was particularly concerning because the plaintiffs had asked the court to narrow the scope of Section 230 when internet intermediaries recommend third-party content. As we’ve said for decades, Section 230 is one of the most important laws for protecting internet users’ speech. We argued in our brief that narrowing Section 230, the law that generally protects users and online services from lawsuits based on content created by others, in any way would lead to increased censorship and a degraded online experience for users; as would holding platforms responsible for aiding and abetting acts of terrorism. Thankfully, the court declined to address the scope of Section 230 and held that the online platforms may not generally be held liable under the Anti-Terrorism Act. 

True Threats Online 

Case: Counterman v. Colorado – DECIDED 

The court considered what state of mind a speaker must have to lose First Amendment protection and be liable for uttering “true threats,” in a case involving Facebook messages that led to the defendant’s conviction. The issue before the court was whether any time the government seeks to prosecute someone for threatening violence against another person, it must prove that the speaker had some subjective intent to threaten the victim, or whether the government need only prove, objectively, that a reasonable person would have known that their speech would be perceived as a threat. We urged the court to require some level of subjective intent to threaten before an individual’s speech can be considered a "true threat" not protected by the First Amendment. In our highly digitized society, online speech like posts, messages, and emails, can be taken out of context, repackaged in ways that distort or completely lose their meaning, and spread far beyond the intended recipients. This higher standard is thus needed to protect speech such as humor, art, misunderstandings, satire, and misrepresentations. The court largely agreed and held that subjective understanding by the defendant is required: that, at minimum, the speaker was in fact subjectively aware of the serious risk that the recipient of the statements would regard their speech as a threat, but recklessly made them anyway.  

Encouraging Illegal Immigration  

Case: U.S. v. Hansen - DECIDED  

The court upheld the Encouragement Provision that makes it a federal crime to “encourage or induce” an undocumented immigrant to “reside” in the United States, if one knows that such “coming to, entry, or residence” in the U.S. will be in violation of the law. We urged the court to uphold the Ninth Circuit’s ruling, which found that the language is unconstitutionally overbroad under the First Amendment because it threatens an enormous amount of protected online speech. This includes prohibiting, for example, encouraging an undocumented immigrant to take shelter during a natural disaster, advising an undocumented immigrant about available social services, or even providing noncitizens with Know Your Rights resources or certain other forms of legal advice. Although the court declined to hold the law unconstitutional, it sharply narrowed the law’s impact on free speech, ruling that the Encouragement Provision applies only to the intentional solicitation or facilitation of immigration law violations. 

Public Officials Censoring Social Media Comments 

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – PENDING 

The court is considering a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The First Amendment generally prohibits viewpoint-based discrimination in government forums open to speech by members of the public. The threshold question in these cases is what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions. We argued that the court should establish a functional test that looks at how an account is actually used. It is important that the court make clear once and for all that public officials using social media in furtherance of their official duties can’t sidestep their First Amendment obligations because they’re using nominally “personal” or preexisting campaign accounts. 

Government Mandates for Platforms to Carry Certain Online Speech 

Cases: NetChoice v. Paxton and Moody v. NetChoice - PENDING 

The court will hear arguments this spring about whether laws in Florida and Texas violate the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. Although the state laws differ in how they operate and the type of mandates they impose, each law represents a profound intrusion into social media sites’ ability to decide for themselves what speech they will publish and how they will present it to users. As we argued in urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs. To be sure, internet users are rightly frustrated with social media services’ content moderation practices, which are often perplexing and mistaken. But permitting Florida and Texas to deploy the state’s coercive power in retaliation for those concerns raises significant First Amendment and human rights concerns. 

Government Coercion in Content Moderation 

Case: Murthy v. Missouri – PENDING 

Last, but certainly not least, the court is considering the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech. But the court has not previously applied this principle to government communications with social media sites about user posts. We urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible. We also urged the court to make clear that courts reviewing claims of impermissible government involvement in content moderation are obligated to conduct fact- and context-specific inquiries. And we argued that close cases should go against the government, as it is the best positioned to ensure that its involvement in platforms’ policy enforcement decisions remains permissible. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Protecting Kids on Social Media Act: Amended and Still Problematic

Senators who believe that children and teens must be shielded from social media have updated the problematic Protecting Kids on Social Media Act, though it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition.  

As we wrote in August, the original bill (S. 1291) contained a host of problems. A recent draft of the amended bill gets rid of some of the most flagrantly unconstitutional provisions: It no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media. 

However, the amended bill is still rife with issues.   

The biggest is that it prohibits children under 13 from using any ad-based social media. Though many social media platforms do require users to be over 13 to join (primarily to avoid liability under COPPA), some platforms designed for young people do not. Most of those platforms are not ad-based, but there is no reason that young people should be barred entirely from a thoughtful, cautious platform that is designed for children, but which also relies on contextual ads. Were this bill made law, ad-based platforms may switch to a fee-based model, limiting access only to young people who can afford the fee. Banning children under 13 from having social media accounts is a massive overreach that takes authority away from parents and infringes on the First Amendment rights of minors.  

The vast majority of content on social media is lawful speech fully protected by the First Amendment. Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media. At the same time, parents have a right to oversee their children’s online activities. But the First Amendment forbids Congress from making a freewheeling determination that children can be blocked from accessing lawful speech. The Supreme Court has ruled that there is no children’s exception to the First Amendment.   

Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media.

Perhaps recognizing this, the amended bill includes a caveat that children may still view publicly available social media content that is not behind a login, or through someone else’s account (for example, a parent’s account). But this does not help the bill. Because the caveat is essentially a giant loophole that will allow children to evade the bill’s prohibition, it raises legitimate questions about whether the sponsors are serious about trying to address the purported harms they believe exist anytime minors access social media. As the Supreme Court wrote in striking down a California law aimed at restricting minors’ access to violent video games, a law that is so “wildly underinclusive … raises serious doubts about whether the government is in fact pursuing the interest it invokes….” If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

Another problem: The amended bill employs a new standard for determining whether platforms know the age of users: “[a] social media platform shall not permit an individual to create or maintain an account if it has actual knowledge or knowledge fairly implied on the basis of objective circumstances that the individual is a child [under 13].” As explained below, this may still force online platforms to engage in some form of age verification for all their users. 

While this standard comes from FTC regulatory authority, the amended bill attempts to define it for the social media context. The amended bill directs courts, when determining whether a social media company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor, to consider “competent and reliable empirical evidence, taking into account the totality of the circumstances, including whether the operator, using available technology, exercised reasonable care.” But, according to the amended bill, “reasonable care” is not meant to mandate “age gating or age verification,” the collection of “any personal data with respect to the age of users that the operator is not already collecting in the normal course of business,” the viewing of “users’ private messages” or the breaking of encryption. 

While these exclusions provide superficial comfort, the reality is that companies will take the path of least resistance and will be incentivized to implement age gating and/or age verification, which we’ve raised concerns about many times over. This bait-and-switch tactic is not new in bills that aim to protect young people online. Legislators, aware that age verification requirements will likely be struck down, are explicit that the bills do not require age verification. Then, they write a requirement that would lead most companies to implement age verification or else face liability.  

If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

In practice, it’s not clear how a court is expected to determine whether a company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor in the event of an enforcement action. In this case, while the lack of age gating/age verification mechanisms may not be proof that a company failed to exercise reasonable care in letting a child under 13 use the site, the use of age gating/age verification tools to deny children under 13 the ability to use a social media site will surely be an acceptable way to avoid liability. Moreover, without more guidance, this standard of “reasonable care” is quite vague, which poses additional First Amendment and due process problems. 

Finally, although the bill no longer creates a digital ID pilot program for age verification, it still tries to push the issue forward. The amended bill orders a study and report looking at “current available technology and technologically feasible methods and options for developing and deploying systems to provide secure digital identification credentials; and systems to verify age at the device and operating system level.” But any consideration of digital identification for age verification is dangerous, given the risk of sliding down the slippery slope toward a national ID that is used for many more things than age verification and that threatens individual privacy and civil liberties. 

EFF to Ninth Circuit: Activists’ Personal Information Unconstitutionally Collected by DHS Must Be Expunged

EFF filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit in a case that has serious implications for people’s First Amendment rights to engage in cross-border journalism and advocacy.

In 2019, the local San Diego affiliate for NBC News broke a shocking story: components of the federal government were conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico.

The Inspector General for the Department of Homeland Security, the agency’s watchdog, later reported that the U.S. government shared sensitive information with the Mexican government, and U.S. officials had improperly asked Mexican officials to deny entry into Mexico to Americans to prevent them from doing their jobs.

The ACLU of Southern California, representing three of these individuals, sued Customs & Border Protection (CBP), Immigration & Customs Enforcement (ICE), and the FBI, in a case called Phillips v. CBP. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases.

Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs.

The panel held that the plaintiffs don’t have standing to bring the lawsuit because they don’t have sufficient privacy interests in the personal information the government collected about them, in part because the data was gleaned from public sources such as social media. The panel also held there is no standing because there isn’t a sufficient risk of future harm from the government’s retention of the information.

The plaintiffs recently asked the three-judge panel to reconsider its decision, or alternatively, for the full Ninth Circuit to conduct an en banc review of the panel’s decision. 

In our amicus brief, we argued that the plaintiffs have privacy interests in the personal information the government collected about them, which included details about their First Amendment-protected “political views and associations.” We cited to Supreme Court precedent that has found privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology.

We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm. These risks include sharing data across agencies or even with other governments due to lax or nonexistent policies on data sharing; government employees abusing individuals’ data; and CBP’s poor track record of keeping digital data safe from data breaches.

We hope that the panel reconsiders its erroneous decision and holds that the plaintiffs have standing to seek expungement of the information the government collected about them; or that the full Ninth Circuit agrees to review the panel’s original decision, to protect Americans’ free speech and privacy rights.

EFF to D.C. Circuit: Animal Rights Activists Shouldn’t Be Censored on Government Social Media Pages Because Agency Disagrees With Their Viewpoint

Intern Muhammad Essa contributed to this post.

EFF, along with the Foundation for Individual Rights and Expression (FIRE), filed a brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court ruling that upheld the censorship of public comments on a government agency’s social media pages. The district court’s decision is problematic because it undermines our right to freely express opinions on issues of public importance using a modern and accessible way to communicate with government representatives.

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH), arguing that NIH blocks their comments against animal testing in scientific research on the agency’s Facebook and Instagram pages, in violation of the First Amendment. NIH provides funding for research that involves testing on animals from rodents to primates.

NIH claims to apply a general rule prohibiting public comments that are “off topic” to the agency’s social media posts—yet the agency implements this rule by employing keyword filters that include words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop. These words are commonly found in comments that express a viewpoint that is against animal testing and sympathetic to animal rights.

First Amendment law makes it clear that when a government agency opens a forum for public participation, such as the interactive spaces of the agency’s social media pages, it is prohibited from censoring a particular viewpoint in that forum. Any speech restrictions that it may apply must be viewpoint-neutral, meaning that the restrictions should apply equally to all viewpoints related to a topic, not just to the viewpoint that the agency disagrees with.

EFF’s brief argues that courts must approach with skepticism a government agency’s claim that its “off topic” speech restriction is viewpoint-neutral and is only intended to exclude irrelevant comments. How such a rule is implemented could reveal that it is in fact a guise for unconstitutional viewpoint discrimination. This is the case here, and the district court erred in ruling for the government.

For example, EFF’s brief argues that NIH’s automated keyword filters are imprecise—they are incapable of accurately implementing an “off topic” rule because they are incapable of understanding context and nuance, which is necessary when comparing a comment to a post. Also, NIH’s keyword filters and the agency’s manual enforcement of the “off topic” rule are highly underinclusive—that is, other people's comments that are “off topic” to a post are often allowed to remain on the agency’s social media pages. Yet PETA’s comments against animal testing are reliably censored.
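The imprecision described above is easy to see in miniature. The following sketch is purely illustrative (the function and its logic are our hypothetical, not NIH’s actual system), using the keywords reported in this case; a filter that merely matches words has no way to tell a comment that is on topic but critical apart from one that is off topic but inoffensive.

```python
# Hypothetical illustration of a naive keyword filter, built from the
# keywords reported in this case. Not NIH's actual implementation.
BLOCKED_KEYWORDS = {"cruelty", "revolting", "tormenting", "torture",
                    "hurt", "kill", "stop"}

def is_blocked(comment: str) -> bool:
    """Flag a comment if any of its words matches a blocked keyword."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not BLOCKED_KEYWORDS.isdisjoint(words)

# An on-topic comment about the agency's own research is caught...
print(is_blocked("Does this NIH study hurt the primates involved?"))  # True
# ...while a genuinely off-topic comment sails through.
print(is_blocked("Nice weather in Bethesda today!"))                  # False
```

Because the filter sees only isolated words, not the relationship between a comment and the post it responds to, it systematically blocks one viewpoint while leaving truly irrelevant comments untouched.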

Imprecise and underinclusive enforcement of the “off topic” rule suggests that NIH’s rule is not viewpoint-neutral but is really a means to block PETA activists from engaging with the agency online.

EFF’s brief urges the D.C. Circuit to reject the district court’s erroneous holding and rule in favor of the plaintiffs. This would protect everyone’s right to express their opinions freely online. The free exchange of opinions informs public policy and is a crucial characteristic of a democratic society. A genuine representative government must not be afraid of public criticism.
