
EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rules

EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. 

Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

The DSA Must Not Become A Tool For Global Censorship

The letter plays into some of the worst fears of the DSA’s critics: that it will be used by EU regulators as a global censorship tool rather than to address societal risks in the EU.

The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures,” “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech. It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate.

Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk.

People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. 

Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. 

Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harm to people who live and reside within the EU, nor what actions the Commission expects VLOPs to take to address those risks. The letter itself admits that the assessment is still in process, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for a human rights-centered enforcement of the DSA, one that also considers the DSA’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly.

Read the full letter here.

In These Five Social Media Speech Cases, Supreme Court Set Foundational Rules for the Future

By: David Greene
August 14, 2024 at 15:25

The U.S. Supreme Court addressed government’s various roles with respect to speech on social media in five cases reviewed in its recently completed term. The through-line of these cases is a critically important principle that sets limits on government’s ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users’ First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation, but will not be infringed by the independent decisions of the platforms themselves.

As a general overview, the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, looked at government’s role as a regulator of social media platforms. The issue was whether state laws in Texas and Florida that prevented certain online services from moderating content were constitutional in most of their possible applications. The Supreme Court did not rule on that question and instead sent the cases back to the lower courts to reexamine NetChoice’s claim that the statutes had few possible constitutional applications.

The court did, importantly and correctly, explain that at least Facebook’s Newsfeed and YouTube’s Homepage were examples of platforms exercising their own First Amendment rights in deciding how to display and organize content, and that the laws could not be constitutionally applied to Newsfeed, Homepage, and similar services, a preliminary step in determining whether the laws were facially unconstitutional.

Lindke v. Freed and Garnier v. O’Connor-Ratcliffe looked at the government’s role as a social media user who has an account and wants to use its full features, including blocking other users and deleting comments. The Supreme Court instructed the lower courts to first look to whether a government official has the authority to speak on behalf of the government, before looking at whether the official used their social media page for governmental purposes, conduct that would trigger First Amendment protections for the commenters.

Murthy v. Missouri, the jawboning case, looked at the government’s mixed role as a regulator and user, in which the government may be seeking to coerce platforms to engage in unconstitutional censorship or may also be a user simply flagging objectionable posts as any user might. The Supreme Court found that none of the plaintiffs had standing to bring the claims because they could not show that their harms were traceable to any action by the federal government defendants.

We’ve analyzed each of the Supreme Court decisions, Moody v. NetChoice (decided with NetChoice v. Paxton), Murthy v. Missouri, and Lindke v. Freed (decided with Garnier v. O’Connor-Ratcliffe), in depth.

But some common themes emerge when all five cases are considered together.

  • Internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves. This principle, which EFF has been advocating for many years, is evident in each of the rulings. In Lindke, the Supreme Court recognized that government officials, if vested with and exercising official authority, could violate the First Amendment by deleting a user’s comments or blocking them from commenting altogether. In Murthy, the Supreme Court found that users could not sue the government for violating their First Amendment rights unless they could show that government coercion, rather than the social media platform’s own editorial decision, led to their content being taken down or obscured. And in the NetChoice cases, the Supreme Court explained that social media platforms typically exercise their own protected First Amendment rights when they edit and curate which posts they show to their users, and the government may violate the First Amendment when it requires them to publish or amplify posts.

  • Underlying these rulings is the Supreme Court’s long-awaited recognition that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and are often guided in this process by their own community standards or similar editorial policies. This is seen in the Supreme Court’s emphasis in Murthy that jawboning is not actionable if the content moderation was the independent decision of the platform rather than coerced by the government. And a similar recognition of independent decision-making underlies the Supreme Court’s First Amendment analysis in the NetChoice cases. The Supreme Court has now thankfully moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Supreme Court used that language to describe the process in last term’s case, Twitter v. Taamneh.

  • This term’s cases also confirm that traditional First Amendment rules apply to social media. In Lindke, the Supreme Court recognized that when government controls the comments components of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. In the NetChoice cases, the Supreme Court found that platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Plenty of legal issues around social media remain to be decided. But the 2023-24 Supreme Court term has set out important speech-protective rules that will serve as the foundation for many future rulings. 

 

Victory! D.C. Circuit Rules in Favor of Animal Rights Activists Censored on Government Social Media Pages

In a big win for free speech online, the U.S. Court of Appeals for the D.C. Circuit ruled that a federal agency violated the First Amendment when it blocked animal rights activists from commenting on the agency’s social media pages. We filed an amicus brief in the case, joined by the Foundation for Individual Rights and Expression (FIRE).

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH) in 2021, arguing that the agency unconstitutionally blocked their comments opposing animal testing in scientific research on the agency’s Facebook and Instagram pages. (NIH provides funding for research that involves testing on animals.)

NIH argued it was simply implementing reasonable content guidelines that included a prohibition against public comments that are “off topic” to the agency’s social media posts. Yet the agency implemented the “off topic” rule by employing keyword filters that included words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop to block PETA activists from posting comments that included these words.
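
To illustrate how blunt this kind of filtering is, here is a minimal sketch of a naive keyword filter. It is an illustration only: NIH’s actual configuration is not public, and the word list and sample comments below are hypothetical. It shows the problem discussed in the sections that follow: a filter that matches words without reading context hides on-topic comments while letting off-topic ones through.

```python
import re

# Hypothetical word list, loosely based on the terms described in this case.
# This is NOT NIH's actual filter, whose implementation has not been published.
BLOCKED_KEYWORDS = {"cruelty", "revolting", "tormenting", "torture", "hurt", "kill", "stop"}

def is_hidden(comment: str) -> bool:
    """Return True if the comment contains any blocked keyword as a whole word."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return not words.isdisjoint(BLOCKED_KEYWORDS)

# An off-topic comment the rule was presumably aimed at is hidden...
print(is_hidden("Stop the torture of animals!"))                       # True
# ...but so is an on-topic question about a post describing animal research,
# because the filter cannot tell what the comment is actually about.
print(is_hidden("Does this study hurt the mice, or is it painless?"))  # True
# Meanwhile, a genuinely off-topic comment that avoids the listed words passes.
print(is_hidden("Great football game last night!"))                    # False
```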

NIH’s Social Media Pages Are Limited Public Forums

The D.C. Circuit first had to determine whether the comment sections of NIH’s social media pages are designated public forums or limited public forums. As the court explained, “comment threads of government social media pages are designated public forums when the pages are open for comment without restrictions and limited public forums when the government prospectively sets restrictions.”

The court concluded that the comment sections of NIH’s Facebook and Instagram pages are limited public forums: “because NIH attempted to remove a range of speech violating its policies … we find sufficient evidence that the government intended to limit the forum to only speech that meets its public guidelines.”

The nature of the government forum determines what First Amendment standard courts apply in evaluating the constitutionality of a speech restriction. Speech restrictions that define limited public forums must only be reasonable in light of the purposes of the forum, while speech restrictions in designated public forums must satisfy more demanding standards. In both forums, however, viewpoint discrimination is prohibited.

NIH’s Social Media Censorship Violated Animal Rights Activists’ First Amendment Rights

After holding that the comment sections of NIH’s Facebook and Instagram pages are limited public forums subject to a lower standard of reasonableness, the D.C. Circuit then nevertheless held that NIH’s “off topic” rule as implemented by keyword filters is unreasonable and thus violates the First Amendment.

The court explained that because the purpose of the forums (the comment sections of NIH’s social media pages) is directly related to speech, “reasonableness in this context is thus necessarily a more demanding test than in forums that have a primary purpose that is less compatible with expressive activity, like the football stadium.”

In rightly holding that NIH’s censorship was unreasonable, the court adopted several of the arguments we made in our amicus brief, in which we assumed that NIH’s social media pages are limited public forums but argued that the agency’s implementation of its “off topic” rule was unreasonable and thus unconstitutional.

Keyword Filters Can’t Discern Context

We argued, for example, that keyword filters are an “unreasonable form of automated content moderation because they are imprecise and preclude the necessary consideration of context and nuance.”

Similarly, the D.C. Circuit stated, “NIH’s off-topic policy, as implemented by the keywords, is further unreasonable because it is inflexible and unresponsive to context … The permanent and context-insensitive nature of NIH’s speech restriction reinforces its unreasonableness.”

Keyword Filters Are Overinclusive

We also argued, related to context, that keyword filters are unreasonable “because they are blunt tools that are overinclusive, censoring more speech than the ‘off topic’ rule was intended to block … NIH’s keyword filters assume that words related to animal testing will never be used in an on-topic comment to a particular NIH post. But this is false. Animal testing is certainly relevant to NIH’s work.”

The court acknowledged this, stating, “To say that comments related to animal testing are categorically off-topic when a significant portion of NIH’s posts are about research conducted on animals defies common sense.”

NIH’s Keyword Filters Reflect Viewpoint Discrimination

We also argued that NIH’s implementation of its “off topic” rule through keyword filters was unreasonable because those filters reflected a clear intent to censor speech critical of the government, that is, speech reflecting a viewpoint that the government did not like.

The court recognized this, stating, “NIH’s off-topic restriction is further compromised by the fact that NIH chose to moderate its comment threads in a way that skews sharply against the appellants’ viewpoint that the agency should stop funding animal testing by filtering terms such as ‘torture’ and ‘cruel,’ not to mention terms previously included such as ‘PETA’ and ‘#stopanimaltesting.’”

On this point, we further argued that “courts should consider the actual vocabulary or terminology used … Certain terminology may be used by those on only one side of the debate … Those in favor of animal testing in scientific research, for example, do not typically use words like cruelty, revolting, tormenting, torture, hurt, kill, and stop.”

Additionally, we argued that “a highly regulated social media comments section that censors Plaintiffs’ comments against animal testing gives the false impression that no member of the public disagrees with the agency on this issue.”

The court acknowledged both points, stating, “The right to ‘praise or criticize governmental agents’ lies at the heart of the First Amendment’s protections … and censoring speech that contains words more likely to be used by animal rights advocates has the potential to distort public discourse over NIH’s work.”

We are pleased that the D.C. Circuit took many of our arguments to heart in upholding the First Amendment rights of social media users in this important internet free speech case.

Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

By: David Greene
July 22, 2024 at 15:34

We don’t know a lot more about when government jawboning social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the statements by the government they complained of were likely the cause of any specific actions taken by the social media platforms against them, or that they would happen again.

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 

While the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. NetChoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that it was because of the government’s dictate, not the platform’s own decision.

Plaintiffs Lack Standing to Bring Jawboning Claims

Article III of the U.S. Constitution limits federal courts to only considering “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that was traceable to the defendants and which the court has the power to fix. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.

The main fault in the Murthy plaintiffs’ case was weak evidence

The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

 The Court Narrows the Right to Listen 

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States who had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward EFF will advocate for a narrow reading of this holding. 

 As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.

In NRA v. Vullo, the Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan.

NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss before any discovery had been conducted and when courts are required to accept all of the plaintiffs’ factual allegations as true. 

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted, as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone the more likely they are to perceive their speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce was also highly relevant. The Supreme Court did not directly state this, unfortunately. But they did several times refer to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”  

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

Platforms Have First Amendment Right to Curate Speech, As We’ve Long Argued, Supreme Court Said, But Sends Laws Back to Lower Court To Decide If That Applies To Other Functions Like Messaging

By: David Greene
July 13, 2024 at 23:07

Social media platforms, at least in their most common form, have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited, the U.S. Supreme Court stated in its landmark decision in Moody v. NetChoice and NetChoice v. Paxton, which were decided together. 

The cases dealt with Florida and Texas laws that each limited the ability of online services to block, deamplify, or otherwise negatively moderate certain user speech.  

Yet the Supreme Court did not strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. 

The Supreme Court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged. 

This is an important ruling and one that EFF has been arguing for in courts since 2018. We’ve already published our high-level reaction to the decision and written about how it bears on pending social media regulations. This post is a more thorough, and much longer, analysis of the opinion and its implications for future lawsuits. 

A First Amendment Right to Moderate Social Media Content 

The most important question before the Supreme Court, and the one that will have the strongest ramifications beyond the specific laws being challenged here, is whether social media platforms have their own First Amendment rights, independent of their users’ rights, to decide what third-party content to present in their users’ feeds, recommend, amplify, deamplify, label, or block. The lower courts in the NetChoice cases reached opposite conclusions, with the 11th Circuit, considering the Florida law, finding a First Amendment right to curate, and the 5th Circuit, considering the Texas law, refusing to do so.

The Supreme Court appropriately resolved that conflict between the two appellate courts and answered this question yes, treating social media platforms the same as other entities that compile, edit, and curate the speech of others, such as bookstores, newsstands, art galleries, parade organizers, and newspapers.  As Justice Kagan, writing for the court’s majority, wrote, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”   

As the Supreme Court explained,  

Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party’s expressive choices—the government confronts the First Amendment. 

The court thus chose to apply the line of precedent from Miami Herald Co. v. Tornillo—in which the Supreme Court in 1974 struck down a law that required newspapers that endorsed a candidate for office to provide space to that candidate’s opponents to reply—and rejected the line of precedent from PruneYard Shopping Center v. Robins—a 1980 case in which the Supreme Court ruled that a state court decision that the California Constitution required a particular shopping center to let a group set up a table and collect signatures when it allowed other groups to do so did not violate the First Amendment.

In Moody, the Supreme Court explained that the latter rule applied only to situations in which the host itself was not engaged in an inherently expressive activity. That is, a social media platform deciding what user-generated content to select and recommend to its users is inherently expressive, but a shopping center deciding who gets to table on its private property is not.

So, the Supreme Court said, the 11th Circuit got it right and the 5th Circuit did not. Indeed, the 5th Circuit got it very wrong. In the Supreme Court’s words, the 5th Circuit’s opinion “rests on a serious misunderstanding of First Amendment precedent and principle.” 

This is also the position EFF has been making in courts since at least 2018. As we wrote then, “The law is clear that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate the content. The Supreme Court has long held that private publishers have a First Amendment right to control the content of their publications. Miami Herald Co. v. Tornillo, 418 U.S. 241, 254-44 (1974).” 

This is an important rule in several contexts in addition to the state must-carry laws at issue in these cases. The same rule will apply to laws that restrict the publication and recommendation of lawful speech by social media platforms, or otherwise interfere with content moderation. And it will apply to civil lawsuits brought by those whose content has been removed, demoted, or demonetized. 

Applying this rule, the Supreme Court concluded that Texas’s law could not be constitutionally applied against Facebook’s Newsfeed and YouTube’s homepage. (The Court did not specifically address Florida’s law since it was writing in the context of identifying the 5th Circuit’s errors.)

Which Services Have This First Amendment Right? 

But the Supreme Court’s ruling doesn’t make clear which other functions of which services enjoy this First Amendment right to curate. The Supreme Court specifically analyzed only Facebook’s Newsfeed and YouTube’s homepage. It did not analyze any services offered by other platforms or other functions offered through Facebook, like messaging or event management. 

The opinion does, however, identify some factors that will be helpful in assessing which online services have the right to curate. 

  • Targeting and customizing the publication of user-generated content is protected, whether by algorithm or otherwise, pursuant to the company’s own content rules, guidelines, or standards. The Supreme Court specified that it was not assessing whether the same right would apply to personalized curation decisions made algorithmically solely based on user behavior online without any reference to a site’s own standards or guidelines. 
  • Content moderation such as labeling user posts with warnings, disclaimers, or endorsements for all users, or deletion of posts, again pursuant to a site’s own rules, guidelines, or standards, is protected. 
  • The combination of multifarious voices “to create a distinctive expressive offering” or have a “particular expressive quality” based on a set of beliefs about which voices are appropriate or inappropriate, a process that is often “the product of a wealth of choices,” is protected. 
  • There is no threshold of selectivity a service must surpass to have curatorial freedom, a point we argued in our amicus brief. “That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference,” the Supreme Court said. Courts should not focus on the ratio of rejected to accepted posts in deciding whether the right to curate exists: “It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.”
  • Curatorial freedom exists even when no one is likely to view a platform’s editorial decisions as their endorsement of the ideas in posts they choose to publish. As the Supreme Court said, “this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution.” 

Considering these factors, the First Amendment right will apply to a wide range of social media services, what the Supreme Court called “Facebook Newsfeed and its ilk” or “its near equivalents.” But its application is less clear to messaging, e-commerce, event management, and infrastructure services.

The Court, Finally, Seems to Understand Content Moderation 

Also noteworthy is that in concluding that content moderation is protected First Amendment activity, the Supreme Court showed that it finally understands how content moderation works. It accurately described the process of how social media platforms decide what any user sees in their feed. For example, it wrote:

In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. 

and 

In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content. 

This comes only a year after Justice Kagan, who wrote this opinion, remarked of the Supreme Court during another oral argument that, “These are not, like, the nine greatest experts on the internet.” In hindsight, that statement seems more of a comment on her colleagues’ understanding than her own. 

Importantly, the Court has now moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Court used that language to describe the process in last term’s case, Twitter v. Taamneh. It is now clear that in the Taamneh case, the court was referring to Twitter’s passive relationship with ISIS, in that Twitter treated it like any other account holder, a relationship that did not support the terrorism aiding and abetting claims made in that case. 

Supreme Court Suggests Competition Law to Address Undue Market Influences 

Another important element of the Supreme Court’s analysis is its treatment of the posited rationale for both states’ speech restrictions: the need to improve or better balance the marketplace of ideas. Both laws were passed in response to perceived censorship of conservative voices, and the states sought to eliminate this perceived political bias from the platform’s editorial practices.  

The Supreme Court found that this was not a sufficiently important reason to limit speech, as is required under First Amendment scrutiny: 

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. . . . The government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey. 

But, as EFF has consistently urged in its amicus briefs, in these cases and others, that ruling does not leave states without any way of addressing harms caused by the market dominance of certain services.   

So, it is very heartening to see the Supreme Court point specifically to competition law as an alternative. In the Supreme Court’s words, “Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access." 

While not mentioned, we think this same reasoning supports many data privacy laws as well.  

Nevertheless, the Court Did Not Strike Down Either Law

Despite this analysis, the Supreme Court did not strike down either law. Rather, it sent the cases back to the lower courts to decide whether the lawsuits were proper facial challenges to the law.  

A facial challenge is a lawsuit that argues that a law is unconstitutional in every one of its applications. Outside of the First Amendment, facial challenges are permissible only if there is no possible constitutional application of the law or, as the courts say, the law “lacks a plainly legitimate sweep.” However, in First Amendment cases, a special rule applies: a law may be struck down as overbroad if there are a substantial number of unconstitutional applications relative to the law’s permissible scope. 

To assess whether a facial challenge is proper, a court is thus required to do a three-step analysis. First, a court must identify a law’s “sweep,” that is, to whom and what actions it applies. Second, the court must then identify which of those possible applications are unconstitutional. Third, the court must then both quantitatively and qualitatively compare the constitutional and unconstitutional applications–principal applications of the law, that is, the ones that seemed to be the law’s primary targets, may be given greater weight in that balancing. The court will strike down the law only if the unconstitutional applications are substantially greater than the constitutional ones.  

The Supreme Court found that neither court conducted this analysis with respect to either the Florida or Texas law. So, it sent both cases back down so the lower courts could do so. Its First Amendment analysis set forth above was to guide the courts in determining which applications of the laws would be unconstitutional. The Supreme Court found that the Texas law cannot be constitutionally applied to Facebook’s Newsfeed or YouTube’s homepage—but the lower courts now need to complete the analysis.

While these limitations on facial challenges have been well established for some time, the Supreme Court’s focus on them here was surprising because blatantly unconstitutional laws are challenged facially all the time.  

Here, however, the Supreme Court was reluctant to apply its First Amendment analysis beyond large social media platforms like Facebook’s Newsfeed and its close equivalents. The Court was also unsure whether and how either law would be applied to scores of other online services, such as email, direct messaging, e-commerce, payment apps, ride-hailing apps, and others. It wants the lower courts to look at those possible applications first. 

This decision thus creates a perverse incentive for states to pass laws that by their language broadly cover a wide range of activities, and in doing so make a facial challenge more difficult.

For example, the Florida law defines covered social media platforms as "any information service, system, Internet search engine, or access software provider that does business in this state and provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site” which has either gross annual revenues of at least $100 million or at least 100 million monthly individual platform participants globally.

Texas HB20, by contrast, defines “social media platforms” as “an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images,” and specifically excludes ISPs, email providers, online services that are not primarily composed of user-generated content, and services for which the social aspects are merely incidental to the primary purpose.

Does this Make the First Amendment Analysis “Dicta”? 

Typically, language in a higher court’s opinion that is necessary to its ultimate ruling is binding on lower courts, while language that is not necessary is merely persuasive “dicta.” Here, the Supreme Court’s ruling was based on the uncertainty about the propriety of the facial challenge, and not the First Amendment issues directly. So, there is some argument that the First Amendment analysis is persuasive but not binding precedent. 

However, the Supreme Court could not responsibly remand the case back to the lower courts to consider the facial challenge question without resolving the split in the circuits, that is, the vastly different ways in which the 5th and 11th Circuits analyzed whether social media content curation is protected by the First Amendment. Without that guidance, neither court would know how to assess whether a particular potential application of the law was constitutional or not. The Supreme Court’s First Amendment analysis thus seems quite necessary and is arguably not dicta. 

And even if the analysis is merely persuasive, six of the justices found that the editorial and curatorial freedom cases like Miami Herald Co. v. Tornillo applied. At a minimum, this signals how they will rule on the issue when it reaches them again. It would be unwise for a lower court to rule otherwise, at least while those six justices remain on the Supreme Court.

What About the Transparency Mandates?

Each law also contains several requirements that the covered services publish information about their content moderation practices. Only one type of these provisions was part of the cases before the Supreme Court, a provision from each law that required covered platforms to provide the user with notice and an explanation of certain content moderation decisions.

Heading into the Supreme Court, it was unclear what legal standard applied to these speech mandates. Was it the undue burden standard, from a case called Zauderer v. Office of Disciplinary Counsel, that applies to mandated noncontroversial and factual disclosures in advertisements and other forms of commercial speech, or the strict scrutiny standard that applies to other mandated disclosures?

The Court remanded this question with the rest of the case. But it did imply, without elaboration, that the Zauderer “undue burden” standard each of the lower courts applied was the correct one.

Tidbits From the Concurring Opinions 

All nine justices on the Supreme Court questioned the propriety of the facial challenges to the laws and favored remanding the cases back to the lower courts. So, officially the case was a unanimous 9-0 decision. But there were four separate concurring opinions that revealed some differences in reasoning, with the most significant difference being that Justices Alito, Thomas, and Gorsuch disagreed with the majority’s First Amendment analysis.

Because a majority of the Supreme Court, five justices, fully supported the First Amendment analysis discussed above, the concurrences have no legal effect. There are, however, some interesting tidbits in them that give hints as to how the justices might rule in future cases.

  • Justice Barrett fully joined the majority opinion. She wrote a separate concurrence to emphasize that the First Amendment issues may play out much differently for services other than Facebook’s Newsfeed and YouTube’s homepage. She expressed a special concern for algorithmic decision-making that does not carry out the platform’s editorial policies. She also noted that a platform’s foreign ownership might affect whether the platform has First Amendment rights, a statement that pretty much everyone assumes is directed at TikTok. 
  • Justice Jackson agreed with the majority that the Miami Herald line of cases was the correct precedent and that the 11th Circuit’s interpretation of the law was correct, whereas the 5th Circuit’s was not. But she did not agree with the majority decision to apply the law to Facebook’s Newsfeed and YouTube’s home page. Rather, the lower courts should do that. She emphasized that the law might be applied differently to different functions of a single service.
  • Justice Alito, joined by Thomas and Gorsuch, emphasized his view that the majority’s First Amendment analysis is nonbinding dicta. He criticized the majority for undertaking the analysis on the record before it. But since the majority did so, he expressed his disagreement with it. He disputed that the Miami Herald line of cases was controlling and raised the possibility that the common carrier doctrine, whereby social media would be treated more like telephone companies, was the more appropriate path. He also questioned whether algorithmic moderation reflects any human’s decision-making and whether community moderation models reflect a platform’s editorial decisions or viewpoints, as opposed to the views of its users.
  • Justice Thomas fully agreed with Justice Alito but wrote separately to make two points. First, he repeated a long-standing belief that the Zauderer “undue burden” standard, and indeed the entire commercial speech doctrine, should be abandoned. Second, he endorsed the common carrier doctrine as the correct law. He also expounded on the dangers of facial challenges. Lastly, Justice Thomas seems to have moved off, at least a little, his previous position that social media platforms were largely neutral pipes that insubstantially engaged with user speech.

How the NetChoice opinion will be viewed by lower courts and what influence it will have on state legislatures and Congress, which continue to seek to interfere with content moderation processes, remains to be seen. 

But the Supreme Court has helpfully resolved a central question and provided a First Amendment framework for analyzing the legality of government efforts to dictate what content social media platforms should or should not publish.

EFF to Sixth Circuit: Government Officials Should Not Have Free Rein to Block Critics on Their Social Media Accounts When Used For Governmental Purposes

Legal intern Danya Hajjaji was the lead author of this post.

The Sixth Circuit must carefully apply a new “state action” test from the U.S. Supreme Court to ensure that public officials who use social media to speak for the government do not have free rein to infringe critics’ First Amendment rights, EFF and the Knight First Amendment Institute at Columbia University said in an amicus brief.

The Sixth Circuit is set to re-decide Lindke v. Freed, a case that was recently remanded from the Supreme Court. The lawsuit arose after Port Huron, Michigan resident Kevin Lindke left critical comments on City Manager James Freed's Facebook page. Freed retaliated by blocking Lindke from being able to view, much less continue to leave critical comments on, Freed’s public profile. The dispute turned on the nature of Freed’s Facebook account, where updates on his government engagements were interwoven with personal posts.

Public officials who use social media as an extension of their office engage in “state action,” which refers to acting on the government’s behalf. They are bound by the First Amendment and generally cannot engage in censorship, especially viewpoint discrimination, by deleting comments or blocking citizens who criticize them. While social media platforms are private corporate entities, government officials who operate interactive online forums to engage in public discussions and share information are bound by the First Amendment.

The Sixth Circuit initially ruled in Freed’s favor, holding that no state action exists due to the prevalence of personal posts on his Facebook page and the lack of government resources, such as staff members or taxpayer dollars, used to operate it.  

The case then went to the U.S. Supreme Court, where EFF and the Knight Institute filed a brief urging the Court to establish a functional test that finds state action when a government official uses a social media account in furtherance of their public duties, even if the account is also sometimes used for personal purposes.

The U.S. Supreme Court crafted a new two-pronged state action test: a government official’s social media activity is state action if 1) the official “possessed actual authority to speak” on the government’s behalf and 2) “purported to exercise that authority” when speaking on social media. As we wrote when the decision came out, this state action test does not go far enough in protecting internet users who interact with public officials online. Nevertheless, the Court has finally provided further guidance on this issue as a result.

Now that the case is back in the Sixth Circuit, EFF and the Knight Institute filed a second brief endorsing a broad construction of the Supreme Court’s state action test.

The brief argues that the test’s “authority” prong requires no more than a showing, either through written law or unwritten custom, that the official had the authority to speak on behalf of the government generally, irrespective of the medium of communication—whether an in-person press conference or social media. It need not be the authority to post on social media in particular.

For high-ranking elected officials (such as presidents, governors, mayors, and legislators), courts should not have a problem finding that they have clear and broad authority to speak on government policies and activities. The same is true for heads of government agencies, who are also generally empowered to speak on matters broadly relevant to those agencies. For lower-ranking officials, courts should consider the areas of their expertise and whether the social media posts in question related to subjects within, as the Supreme Court said, their “bailiwick.”

The brief also argues that the test’s “exercise” prong requires courts to engage in, in the words of the Supreme Court, a “fact-specific undertaking” to determine whether the official was speaking on social media in furtherance of their government duties.

This element is easily met where the social media account is owned, created, or operated by the office or agency itself, rather than the official—for example, the Federal Trade Commission’s @FTC account on X (formerly Twitter).

But when an account is owned by the person and is sometimes used for non-governmental purposes, courts must look to the content of the posts. These include those posts from which the plaintiff’s comments were deleted, or any posts the plaintiff would have wished to see or comment on had the official not blocked them entirely. Former President Donald Trump is a salient example, having routinely used his legacy @realDonaldTrump X account, rather than the government-created and operated account @POTUS, to speak in furtherance of his official duties while president.

However, it is often not easy to differentiate between personal and official speech by looking solely at the posts themselves. For example, a social media post could be either private speech reflecting personal political passions, or it could be speech in furtherance of an official’s duties, or both. If this is the case, courts must consider additional factors when assessing posts made to a mixed-use account. These factors can be an account’s appearance, such as whether government logos were used; whether government resources such as staff or taxpayer funds were used to operate the social media account; and the presence of any clear disclaimers as to the purpose of the account.

EFF and the Knight Institute also encouraged the Sixth Circuit to consider the crucial role social media plays in facilitating public participation in the political process and accountability of government officials and institutions. If the Supreme Court’s test is construed too narrowly, public officials will further circumvent their constitutional obligations by blocking critics or removing any trace of disagreement from any social media accounts that are used to support and perform their official duties.

Social media has given rise to active democratic engagement, and government officials at every level have leveraged it to reach their communities, discuss policy issues, and make important government announcements. Excessively restricting any member of the public’s viewpoints threatens public discourse in spaces government officials have themselves opened as public political forums.

Victory! Supreme Court Rules Platforms Have First Amendment Right to Decide What Speech to Carry, Free of State Mandates

By: David Greene
July 1, 2024 at 11:31

The Supreme Court today correctly found that social media platforms, like newspapers, bookstores, and art galleries before them, have First Amendment rights to curate and edit the speech of others they deliver to their users, and the government has a very limited role in dictating what social media platforms must and must not publish. Although users remain understandably frustrated with how the large platforms moderate user speech, the best deal for users is when platforms make these decisions instead of the government.  

As we explained in our amicus brief, users are far better off when publishers make editorial decisions free from government mandates. Although the court did not reach a final determination about the Texas and Florida laws, it confirmed that their core provisions are inconsistent with the First Amendment when they force social media sites to publish user posts that are, at best, irrelevant, and, at worst, false, abusive, or harassing. The government’s favored speakers would be granted special access to the platforms, and the government’s disfavored speakers silenced. 

We filed our first brief advocating this position in 2018 and are pleased to see that the Supreme Court has finally agreed. 

Notably, the court emphasizes another point EFF has consistently made: that the First Amendment right to edit and curate user content does not immunize social media platforms and tech companies more broadly from other forms of regulation not related to editorial policy. As the court wrote: “Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects.” The court specifically calls out competition law as one avenue to address problems related to market dominance and lack of user choice. Although not mentioned in the court’s opinion, consumer privacy laws are another available regulatory tool.  

We will continue to urge platforms large and small to adopt the Santa Clara Principles as a human rights framework for content moderation. Further, we will continue to advocate for strong consumer data privacy laws to regulate social media companies’ invasive practices, as well as more robust competition laws that could end the major platforms’ dominance.   

EFF has been urging courts to adopt this position for almost six years. We filed our first amicus brief in November 2018: https://www.eff.org/document/prager-university-v-google-eff-amicus-brief  

EFF’s must-carry laws issue page: https://www.eff.org/cases/netchoice-must-carry-litigation 

Press release for our SCOTUS amicus brief: https://www.eff.org/press/releases/landmark-battle-over-free-speech-eff-urges-supreme-court-strike-down-texas-and 

Direct link to our brief: https://www.eff.org/document/eff-brief-moodyvnetchoice

EFF Statement on Assange Plea Deal

By: David Greene
June 25, 2024 at 12:27

The United States has now, for the first time in the more than 100-year history of the Espionage Act, obtained an Espionage Act conviction for basic journalistic acts. The charges in Assange's Criminal Information cover obtaining newsworthy information from a source, communicating it to the public, and expressing an openness to receiving more highly newsworthy information. This sets a dangerous practical precedent, and all those who value a free press should work to make sure that it never happens again. While we are pleased that Assange can now be freed for time served and return to Australia, these charges should never have been brought.

Win for Free Speech! Australia Drops Global Takedown Order Case

As we put it in a blog post last month, no single country should be able to restrict speech across the entire internet. That's why EFF celebrates the news that Australia's eSafety Commissioner is dropping its legal effort to have content on X, the website formerly known as Twitter, taken down across the globe. This development comes just days after EFF and FIRE were granted official intervener status in the case. 

In April, the Commissioner ordered X to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post in Australia, but it declined to block it elsewhere. The Commissioner then asked an Australian court to order a global takedown — securing a temporary order that was not extended. EFF moved to intervene on behalf of X, and legal action was ongoing until this week, when the Commissioner announced she would discontinue Federal Court proceedings. 

We are pleased that the Commissioner saw the error in her efforts and dropped the action. Global takedown orders threaten freedom of expression around the world, create conflicting legal obligations, and reduce the internet to the lowest common denominator of content, allowing the least tolerant legal system to determine what we all are able to read and distribute online. 

As part of our continued fight against global censorship, EFF opposes efforts by individual countries to write the rules for free speech for the entire world. Unfortunately, all too many governments, even democracies, continue to lose sight of how global takedown orders threaten free expression for us all. 

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints those individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes, it can be unclear whether the interactive parts (e.g., the comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.  

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements:

  • the official “possessed actual authority to speak” on the government’s behalf, and
  • the official “purported to exercise that authority when he spoke on social media.”

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion does not make clear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to speak generally on behalf of the government would be easier for plaintiffs to prove and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This is in contrast to what the Sixth Circuit had, essentially, held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”  

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his rights to delete the comments, as the constituent could not prove that the issue was within that particular official’s purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”  

The court further explains:

The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass on information about websites that state election officials have identified as disinformation, even if the agencies don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role the government can play in some situations by sharing its expertise with platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed, and reasonably perceived, as efforts to replace the platform’s editorial decision-making with the government’s.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, these members of Congress signed an amicus brief in Murthy supporting strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations about the FBI and CISA improperly communicating with social media sites that boil down to the agency passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director for National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and if the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of getting the information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.


5 Questions to Ask Before Backing the TikTok Ban

With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk. 

The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions. 

1. Is the TikTok bill about privacy or content?

Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users? 

From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public."

This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”

2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws? 

It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why the EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.

It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to the government upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.

3. If the TikTok bill is about content, how will it avoid violating the First Amendment? 

Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.

The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that people in the U.S. have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.

4. Is the TikTok bill a ban or something else? 

Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.” 

Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment. 

5. Does the U.S. support the free flow of information as a fundamental democratic principle? 

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.

In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”

Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Victory! EFF Helps Resist Unlawful Warrant and Gag Order Issued to Independent News Outlet

Over the past month, the independent news outlet Indybay has quietly fought off an unlawful search warrant and gag order served by the San Francisco Police Department. Today, a court lifted the gag order and confirmed the warrant is void. The police also promised the court not to seek another warrant for Indybay's information in its investigation.

Nevertheless, Indybay was unconstitutionally gagged from speaking about the warrant for more than a month. And the SFPD once again violated the law despite past assurances that it was putting safeguards in place to prevent such violations.

EFF provided pro bono legal representation to Indybay throughout the process.

Indybay’s experience highlights a worrying police tactic of demanding unpublished source material from journalists, in violation of clearly established shield laws. Warrants like the one issued by the police invade press autonomy, chill news gathering, and discourage sources from contributing. While this is a victory, Indybay was still gagged from speaking about the warrant, and it would have had to pay thousands of dollars in legal fees to fight the warrant without pro bono counsel. Other small news organizations might not be so lucky. 

It started on January 18, 2024, when an unknown member of the public published a story on Indybay’s unique community-sourced newswire, which allows anyone to publish news and source material on the website. The author claimed credit for smashing windows at the San Francisco Police Credit Union.

On January 24, police sought and obtained a search warrant that required Indybay to turn over any text messages, online identifiers like IP addresses, or other unpublished information that would help reveal the author of the story. The warrant also ordered Indybay not to speak about the warrant for 90 days. With the help of EFF, Indybay responded that the search warrant was illegal under both California and federal law and requested that the SFPD formally withdraw it. After several more requests and shortly before the deadline to comply with the search warrant, the police agreed not to pursue the warrant further “at this time.” The warrant became void under California law when it was not executed within 10 days, but the gag order remained in place.

Indybay went to court to confirm the warrant would not be renewed and to lift the gag order. It argued it was protected by California and federal shield laws that make it all but impossible for law enforcement to use a search warrant to obtain unpublished source material from a news outlet. California law, Penal Code § 1524(g), in particular, mandates that “no warrant shall issue” for that information. The Federal Privacy Protection Act has some exceptions, but they were clearly not applicable in this situation. Nontraditional and independent news outlets like Indybay are covered by these laws (Indybay fought this same fight more than a decade ago when one of its photographers successfully quashed a search warrant). And when attempting to unmask a source, an IP address can sometimes be as revealing as a reporter’s notebook. In a previous case, EFF established that IP addresses are among the types of unpublished journalistic information typically protected from forced disclosure by law.

In addition, Indybay argued that the gag order was an unconstitutional content-based prior restraint on speech—noting that the government did not have a compelling interest in hiding unlawful investigative techniques.

Rather than fight the case, the police conceded the warrant was void, promised not to seek another search warrant for Indybay’s information during the investigation, and agreed to lift the gag order. A San Francisco Superior Court Judge signed an order confirming that.

That this happened at all is especially concerning since the SFPD had agreed to institute safeguards following its illegal execution of a search warrant against freelance journalist Bryan Carmody in 2019. In settling a lawsuit brought by Carmody, the SFPD agreed to ensure all its employees were aware of its policies concerning warrants directed at journalists. As a result, the department instituted internal guidance and procedures, which do not all appear to have been followed in Indybay's case.

Moreover, the search warrant and gag order should never have been signed by the court given that it was obviously directed to a news organization. We call on the court and the SFPD to meet with those representing journalists to make sure that we don't have to deal with another unconstitutional gag order and search warrant in another few years.

The San Francisco Police Department's public statement on this case is incomplete. It leaves out the fact that Indybay was gagged for more than a month and that it was only Indybay's continuous resistance that prevented the police from acting on the warrant. It also does not mention whether the police department's internal policies were followed in this case. For one thing, this type of warrant requires approval from the chief of police before it is sought, not after. 

Read more here: 

Stipulated Order

Motion to Quash

Search Warrant

Trujillo Declaration

Burdett Declaration

SFPD Press Release

The U.S. Supreme Court’s Busy Year of Free Speech and Tech Cases: 2023 Year in Review

The U.S. Supreme Court has taken an unusually active interest in internet free speech issues. EFF participated as amicus in a whopping nine cases before the court this year. The court decided four of those cases, and decisions in the remaining five cases will be published in 2024.   

Of the four cases decided this year, the results are a mixed bag. The court showed restraint and respect for free speech rights when considering whether social media platforms should be liable for ISIS content, while also avoiding gutting one of the key laws supporting free speech online. The court also heightened protections for speech that may rise to the level of criminal “true threats.” But the court declined to overturn an overbroad law that relates to speech about immigration.  

Next year, we’re hopeful that the court will uphold the right of individuals to comment on government officials’ social media pages, when those pages are largely used for governmental purposes and even when the officials don’t like what those comments say; and that the court will strike down government overreach in mandating what content must stay up or come down online, or otherwise distorting social media editorial decisions. 

Platform Liability for Violent Extremist Content 

Cases: Gonzalez v. Google and Twitter v. Taamneh – DECIDED 

The court, in two similar cases, declined to hold social media companies—YouTube and Twitter—responsible for aiding and abetting terrorist violence allegedly caused by user-generated content posted to the platforms. The case against YouTube (Google) was particularly concerning because the plaintiffs had asked the court to narrow the scope of Section 230 when internet intermediaries recommend third-party content. As we’ve said for decades, Section 230 is one of the most important laws for protecting internet users’ speech. We argued in our brief that narrowing Section 230, the law that generally protects users and online services from lawsuits based on content created by others, in any way would lead to increased censorship and a degraded online experience for users; as would holding platforms responsible for aiding and abetting acts of terrorism. Thankfully, the court declined to address the scope of Section 230 and held that the online platforms may not generally be held liable under the Anti-Terrorism Act. 

True Threats Online 

Case: Counterman v. Colorado – DECIDED 

The court considered what state of mind a speaker must have to lose First Amendment protection and be liable for uttering “true threats,” in a case involving Facebook messages that led to the defendant’s conviction. The issue before the court was whether any time the government seeks to prosecute someone for threatening violence against another person, it must prove that the speaker had some subjective intent to threaten the victim, or whether the government need only prove, objectively, that a reasonable person would have known that their speech would be perceived as a threat. We urged the court to require some level of subjective intent to threaten before an individual’s speech can be considered a "true threat" not protected by the First Amendment. In our highly digitized society, online speech like posts, messages, and emails, can be taken out of context, repackaged in ways that distort or completely lose their meaning, and spread far beyond the intended recipients. This higher standard is thus needed to protect speech such as humor, art, misunderstandings, satire, and misrepresentations. The court largely agreed and held that subjective understanding by the defendant is required: that, at minimum, the speaker was in fact subjectively aware of the serious risk that the recipient of the statements would regard their speech as a threat, but recklessly made them anyway.  

Encouraging Illegal Immigration  

Case: U.S. v. Hansen - DECIDED  

The court upheld the Encouragement Provision that makes it a federal crime to “encourage or induce” an undocumented immigrant to “reside” in the United States, if one knows that such “coming to, entry, or residence” in the U.S. will be in violation of the law. We urged the court to uphold the Ninth Circuit’s ruling, which found that the language is unconstitutionally overbroad under the First Amendment because it threatens an enormous amount of protected online speech. This includes prohibiting, for example, encouraging an undocumented immigrant to take shelter during a natural disaster, advising an undocumented immigrant about available social services, or even providing noncitizens with Know Your Rights resources or certain other forms of legal advice. Although the court declined to hold the law unconstitutional, it sharply narrowed the law’s impact on free speech, ruling that the Encouragement Provision applies only to the intentional solicitation or facilitation of immigration law violations. 

Public Officials Censoring Social Media Comments 

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – PENDING 

The court is considering a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The First Amendment generally prohibits viewpoint-based discrimination in government forums open to speech by members of the public. The threshold question in these cases is what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions. We argued that the court should establish a functional test that looks at how an account is actually used. It is important that the court make clear once and for all that public officials using social media in furtherance of their official duties can’t sidestep their First Amendment obligations because they’re using nominally “personal” or preexisting campaign accounts. 

Government Mandates for Platforms to Carry Certain Online Speech 

Cases: NetChoice v. Paxton and Moody v. NetChoice - PENDING 

The court will hear arguments this spring about whether laws in Florida and Texas violate the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. Although the state laws differ in how they operate and the type of mandates they impose, each law represents a profound intrusion into social media sites’ ability to decide for themselves what speech they will publish and how they will present it to users. As we argued in urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs. To be sure, internet users are rightly frustrated with social media services’ content moderation practices, which are often perplexing and mistaken. But permitting Florida and Texas to deploy the state’s coercive power in retaliation for those concerns raises significant First Amendment and human rights concerns. 

Government Coercion in Content Moderation 

Case: Murthy v. Missouri – PENDING 

Last, but certainly not least, the court is considering the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech. But the court has not previously applied this principle to government communications with social media sites about user posts. We urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible. We also urged the court to make clear that courts reviewing claims of impermissible government involvement in content moderation are obligated to conduct fact- and context-specific inquiries. And we argued that close cases should go against the government, as it is the best positioned to ensure that its involvement in platforms’ policy enforcement decisions remains permissible. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
