
Federal Court Dismisses X's Anti-Speech Lawsuit Against Watchdog

By: Aaron Mackey
April 5, 2024 at 09:01

This post was co-written by EFF legal intern Melda Gurakar.

Researchers, journalists, and everyone else have a First Amendment right to criticize social media platforms and their content moderation practices without fear of being targeted by retaliatory lawsuits, a federal court recently ruled.

The decision by a federal court in California to dismiss a lawsuit brought by Elon Musk’s X against the Center for Countering Digital Hate (CCDH), a nonprofit organization dedicated to fighting online hate speech and misinformation, is a win for greater transparency and accountability of social media companies. The court’s ruling in X Corp. v. Center for Countering Digital Hate Ltd. shows that X had no legitimate basis to bring its case in the first place, as the company used the lawsuit to penalize the CCDH for criticizing X and to deter others from doing so.

Vexatious cases like these are known as Strategic Lawsuits Against Public Participation, or SLAPPs. Rather than seeking to vindicate legitimate legal claims, these lawsuits chill speech by burdening speakers who engaged in protected First Amendment activity with the financial costs and stress of fighting litigation. The goal of these suits is not to win, but to inflict harm on the opposing party for speaking. We are grateful that the court recognized X’s lawsuit as a SLAPP and dismissed it, ruling that the claims lacked legal merit and that the suit violated California’s anti-SLAPP statute.

The lawsuit, filed in July 2023, accused CCDH of unlawfully accessing and scraping data from X’s platform, which X argued CCDH used to harm X Corp.’s reputation and, by extension, its business operations, leading to lost advertising revenue and other damages. X argued that CCDH had initiated a calculated “scare campaign” aimed at deterring advertisers from engaging with the platform, supposedly resulting in a significant financial loss for X. Moreover, X claimed that CCDH breached its Terms of Service contract as a user of X.

The court ruled that X’s accusations were insufficient to bypass the protective shield of California’s anti-SLAPP statute. The court’s dismissal of X Corp.’s claims, including those for breach of contract and alleged violations of the Computer Fraud and Abuse Act (CFAA), stemmed from X Corp.’s inability to convincingly allege or demonstrate significant losses attributable to CCDH’s activities. This outcome is not only a triumph for CCDH, but also validates the anti-SLAPP statute’s role in safeguarding critical research efforts against baseless legal challenges. On the CFAA claim, X had argued that the statute barred CCDH’s scraping of public tweets—an erroneous reading of the law. The court found that, regardless of that argument, X had not shown a “loss” of the type protected by the CFAA, such as technological harms to data or computers.

EFF, alongside the ACLU of Northern California and the national ACLU, filed an amicus brief in support of CCDH, arguing that X Corp.’s lawsuit mischaracterized a nonviable defamation claim as a breach of contract in order to retaliate against CCDH. The brief supported CCDH’s motion to dismiss, arguing that the terms of service provision X invoked against CCDH’s data scraping should be deemed void as contrary to the public interest. It also warned of a potential chilling effect on research and activism that rely on digital platforms to gather information.

The ramifications of X Corp. v. CCDH reach far beyond this decision. The ruling affirms the Center for Countering Digital Hate’s freedom to conduct and publish research that critiques X Corp., and it sets a precedent that protects critical voices from being silenced online. We are grateful that the court reached this correct result and affirmed that people should not be targeted by lawsuits for speaking critically of powerful institutions.

EFF Seeks Greater Public Access to Patent Lawsuit Filed in Texas

You’re not supposed to be able to litigate in secret in the U.S. That’s especially true in a patent case dealing with technology that most internet users rely on every day.

Unfortunately, that’s exactly what’s happening in a case called Entropic Communications, LLC v. Charter Communications, Inc. The parties have made so much of their dispute secret that it is hard to tell how the patents owned by Entropic might affect the Data Over Cable Service Interface Specifications (DOCSIS) standard, a key technical standard that ensures cable customers can access the internet.

In Entropic, both sides are experienced litigants who should know that this type of sealing is improper. Unfortunately, overbroad secrecy is common in patent litigation, particularly in cases filed in the U.S. District Court for the Eastern District of Texas.

EFF has sought to ensure public access to lawsuits in this district for years. In 2016, EFF intervened in another patent case in this very district, arguing that the heavy sealing by a patent owner called Blue Spike violated the public’s First Amendment and common law rights. A judge ordered the case unsealed.

As Entropic shows, however, parties still believe they can shut down the public’s access to presumptively public legal disputes. This secrecy has to stop. That’s why EFF, represented by the Science, Health & Information Clinic at Columbia Law School, filed a motion today seeking to intervene in the case and unseal a variety of legal briefs and evidence submitted in the case. EFF’s motion argues that the legal issues in the case and their potential implications for the DOCSIS standard are a matter of public concern and asks the district court judge hearing the case to provide greater public access.

Protective Orders Cannot Override The Public’s First Amendment Rights

As EFF’s motion describes, the parties appear to have agreed to keep much of their filings secret via what is known as a protective order. These court orders are common in litigation and prevent the parties from disclosing information that they obtain from one another during the fact-gathering phase of a case. Importantly, protective orders set the rules for information exchanged between the parties, not what is filed on a public court docket.

The parties in Entropic, however, are claiming that the protective order permits them to keep secret both legal arguments made in briefs filed with the court as well as evidence submitted with those filings. EFF’s motion argues that this contention is incorrect as a matter of law because the parties cannot use their agreement to abrogate the public’s First Amendment and common law rights to access court records. More generally, relying on protective orders to limit public access is problematic because parties in litigation often have little interest or incentive to make their filings public.

Unfortunately, parties in patent litigation too often seek to seal a variety of information that should be public. EFF continues to push back on these claims. In addition to our work in Texas, we have also intervened in a California patent case, where we also won an important transparency ruling. The court in that case prevented Uniloc, a company that had filed hundreds of patent lawsuits, from keeping the public in the dark as to its licensing activities.

That is why part of EFF’s motion asks the court to clarify that parties litigating in the Texas district court cannot rely on a protective order for secrecy and that they must instead seek permission from the court and justify any claim that material should be filed under seal.

On top of clarifying that the parties’ protective orders cannot frustrate the public’s right to access federal court records, we hope the motion in Entropic helps shed light on the claims and defenses at issue in this case, which are themselves a matter of public concern. The DOCSIS standard is used in virtually all cable internet modems around the world, so the claims made by Entropic may have broader consequences for anyone who connects to the internet via a cable modem.

It’s also impossible to tell if Entropic might want to sue more cable modem makers. So far, Entropic has sued five large TV and internet providers—Charter, Cox, Comcast, DISH TV, and DirecTV—in more than a dozen separate cases. EFF is hopeful that the records will shed light on how broadly Entropic believes its patents can reach cable modem technology.

EFF is extremely grateful that Columbia Law School’s Science, Health & Information Clinic could represent us in this case. We especially thank the student attorneys who worked on the filing, including Sean Hong, Gloria Yi, Hiba Ismail, and Stephanie Lim, and the clinic’s director, Christopher Morten.

EFF to California Appellate Court: Reject Trial Judge’s Ruling That Would Penalize Beneficial Features and Tools on Social Media

EFF legal intern Jack Beck contributed to this post.

A California trial court recently departed from wide-ranging precedent and held that Snap, Inc., the maker of Snapchat, the popular social media app, had created a “defective” product by including features like disappearing messages, the ability to connect with people through mutual friends, and even the well-known “Stories” feature. We filed an amicus brief in the appeal, Neville v. Snap, Inc., at the California Court of Appeal, and are calling for the reversal of the earlier decision, which jeopardizes protections for online intermediaries and thus the free speech of all internet users.

At issue in the case is Section 230, without which the free and open internet as we know it would not exist. Section 230 provides that online intermediaries are generally not responsible for harmful user-generated content. Rather, responsibility for what a speaker says online falls on the person who spoke.

The plaintiffs are a group of parents whose children overdosed on fentanyl-laced drugs obtained through communications enabled by Snapchat. Even though the harm they suffered was premised on user-generated content—messages between the drug dealers and their children—the plaintiffs argued that Snapchat is a “defective product.” They highlighted various features available to all users on Snapchat, including disappearing messages, arguing that the features facilitate illegal drug deals.

Snap sought to have the case dismissed, arguing that the plaintiffs’ claims were barred by Section 230. The trial court disagreed, narrowly interpreting Section 230 and erroneously holding that the plaintiffs were merely trying to hold the company responsible for its own “independent tortious conduct—independent, that is, of the drug sellers’ posted content.” In so doing, the trial court departed from congressional intent and wide-ranging California and federal court precedent.

In a petition for a writ of mandate, Snap urged the appellate court to correct the lower court’s distortion of Section 230. The petition rightfully contends that the plaintiffs are trying to sidestep Section 230 through creative pleading. The petition argues that Section 230 protects online intermediaries from liability not only for hosting third-party content, but also for crucial editorial decisions like what features and tools to offer content creators and how to display their content.

We made two arguments in our brief supporting Snap’s appeal.

First, we explained that the features the plaintiffs targeted—and which the trial court gave no detailed analysis of—are regular parts of Snapchat’s functionality with numerous legitimate uses. Take Snapchat’s option to have messages disappear after a certain period of time. There are times when the option to make messages disappear can be crucial for protecting someone’s safety—for example, dissidents and journalists operating in repressive regimes, or domestic violence victims reaching out for support. It’s also an important privacy feature for everyday use. Simply put: the ability for users to exert control over who can see their messages and for how long advances internet users’ privacy and security under legitimate circumstances.

Second, we highlighted in our brief that this case is about more than concerned families challenging a big tech company. Our modern communications are mediated by private companies, and so any weakening of Section 230 immunity for internet platforms would stifle everyone’s ability to communicate. Should the trial court’s ruling stand, Snapchat and similar platforms will be incentivized to remove features from their online services, resulting in bland and sanitized—and potentially more privacy invasive and less secure—communications platforms. User experience will be degraded as internet platforms are discouraged from creating new features and tools that facilitate speech. Companies seeking to minimize their legal exposure for harmful user-generated content will also drastically increase censorship of their users, and smaller platforms trying to get off the ground will fail to get funding or will be forced to shut down.

There’s no question that what happened in this case was tragic, and people are right to be upset about some elements of how big tech companies operate. But Section 230 is the wrong target. We strongly advocate for Section 230, yet when a tech company does something legitimately irresponsible, the statute still allows it to be held liable—as Snap knows from a lawsuit that put an end to its speed filter.

If the trial court’s decision is upheld, internet platforms would not have a reliable way to limit liability for the services they provide and the content they host. They would face too many lawsuits that cost too much money to defend. They would be unable to operate in their current capacity, and ultimately the internet would cease to exist in its current form. Billions of internet users would lose.

Analyzing KOSA’s Constitutional Problems In Depth 

Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.  

Below, we walk through some of the most common criticisms we’ve gotten—and those criticisms the bill has received—to help explain our view of its likely impacts.  

KOSA’s Effectiveness  

First, and most importantly: We have serious and important disagreements with KOSA’s advocates on whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe comprehensive privacy protections, which we have long advocated for, are a better way to begin addressing harms done to young people (and adults) who have been targeted by platforms’ predatory business practices.

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying  to find their own communities online.  We do not think that language added to KOSA to address that censorship concern solves the problem. We also don’t think that focusing KOSA’s regulation on design elements of online services addresses the First Amendment problems of the bill, either. 

Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of  child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that this law was not only ineffective at reducing sex trafficking online, but also created additional dangers for those same victims as well as others.   

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a book seller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a book seller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.  

We think that online services will react to KOSA’s new liability in much the same way as the bookseller in Smith and the book distributor in Bantam Books: They will limit minors’ access to or simply remove any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their site to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.)

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As said above, the broad censorship that will result will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. The fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative, too.  

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources." 

We understand that some interpret this language as a safeguard for online services that limits their liability if a minor happens across information on topics that KOSA identifies, and consequently, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA. 

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or constitutional sense. As a practical matter, it’s not clear how an online service will be able to rely on the rule of construction’s safeguards given the sheer diversity of content it likely hosts.

Take for example an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views by users, some of which might describe addiction, drug use, and treatment, including negative and positive views on those points. KOSA’s rule of construction might protect the forum from a minor’s initial search for content that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA proscribes, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search for the forum precludes any liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search. 

Further, the rule of construction’s protection for the forum is unlikely to be helpful even if the forum provides only resources for preventing or mitigating drug and alcohol abuse based on evidence-based information and clinical resources. That provision assumes that the forum has the resources to review all existing content on the forum and effectively screen all future content to permit only user-generated content concerning mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes that there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not).

Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much like the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit minors’ ability to access the content, a far easier and safer alternative than making case-by-case expert decisions regarding every piece of content on the forum.

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore would still have had to exhaustively review every book it sold and separate the obscene books from the scientific. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.”

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government overcome its burden on strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction here creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral or even mildly positive of those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.  

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service’s or platform’s design features, the bill raises no First Amendment issues. We disagree.

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content. 

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith had regulated the way in which bookstores were designed, and had imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.

KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement age gating or age verification functionality to comply with KOSA.

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents certain content to minors that the bill deems harmful to them. To comply with that new liability, those platforms and services’ options are limited. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law.

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing the minors are minors.  

We also don’t think KOSA lets platforms  claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset. 

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents this harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, such as the ability to endlessly scroll through content, be notified of other content or messages, or have content autoplay.

But letting an attorney general enforce KOSA’s requirement of design safeguards could serve as a proxy for targeting services that host content certain officials dislike. The attorney general would simply target the same content or service it disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information.

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that only provide direct or group messaging, such as Signal, or to schools, libraries, nonprofits, or ISPs like Comcast generally. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.

But a wide variety of niche online services that are for-profit would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear. 

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Their update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this kind of content filtering as a result of KOSA would be harmful and would violate the First Amendment.

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They very well may decide blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification tools and content moderation tools.  

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this is true, but there is some nuance to it. No one should have to hand over their driver’s license—or, worse, provide biometric information—just to access lawful speech on websites. But there’s nothing in KOSA that would require online platforms to publicly tie your real name to your username.

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn't technically require age verification but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous  data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.   

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online, which is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards. It creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end,  we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue. 

The Foilies 2024

Recognizing the worst in government transparency.

The Foilies are co-written by EFF and MuckRock and published in alternative newspapers around the country through a partnership with the Association of Alternative Newsmedia.

We're taught in school about checks and balances between the various branches of government, but those lessons tend to leave out the role that civilians play in holding officials accountable. We're not just talking about the ballot box, but the everyday power we all have to demand government agencies make their records and data available to public scrutiny.

At every level of government in the United States (and often in other countries), there are laws that empower the public to file requests for public records. They go by various names—Freedom of Information, Right-to-Know, Open Records, or even Sunshine laws—but all share the general concept that because the government is of the people, its documents belong to the people. You don't need to be a lawyer or journalist to file these; you just have to care.

It's easy to feel powerless in these times, as local newsrooms close and elected officials embrace disinformation as a standard political tool. But here's what you can do, and we promise it'll make you feel better: Pick a local agency—it could be a city council, a sheriff's office, or a state department of natural resources—and send them an email demanding their public records request log, or any other record showing what requests they receive, how long it takes them to respond, whether they turn over records, and how much they charge requesters for copies. Many agencies even have an online portal that makes it easier, or you can use MuckRock’s records request tool. (You can also explore other people's results that have been published on MuckRock's FOIA Log Explorer.) That will send the message to local leaders that they're on notice. You may even uncover an egregious pattern of ignoring or willfully violating the law.

The Foilies are our attempt to call out these violations each year during Sunshine Week, an annual event (March 10-16 this year) when advocacy groups, news organizations and citizen watchdogs combine efforts to highlight the importance of government transparency laws. The Electronic Frontier Foundation and MuckRock, in partnership with the Association of Alternative Newsmedia, compile the year's worst and most ridiculous responses to public records requests and other attempts to thwart public access to information, including through increasing attempts to gut the laws guaranteeing this access—and we issue these agencies and officials tongue-in-cheek "awards" for their failures.

Sometimes, these awards actually make a difference. Last year, Mendocino County in California repealed its policy of charging illegal public records fees after local journalists and activists used The Foilies’ "The Transparency Tax Award" in their advocacy against the rule.

This year marks our 10th annual accounting of ridiculous redactions, outrageous copying fees, and retaliatory attacks on requesters—and we have some doozies for the ages.

The "Winners"

The Not-So-Magic Word Award: Augusta County Sheriff’s Office, Va.

Public records laws exist in no small part because corruption, inefficiency and other malfeasance happen, regardless of the size of the government. The public’s right to hold these entities accountable through transparency can prevent waste and fraud.

Of course, this kind of oversight can be very inconvenient to those who would like a bit of secrecy. Employees in Virginia’s Augusta County thought they’d found a neat trick for foiling Virginia's Freedom of Information Act.

Consider: “NO FOIA”

In an attempt to shield a batch of emails from the public eye, employees in Augusta County began tagging their messages with “NO FOIA,” an apparent incantation that staff believed could ward off transparency. Of course, there are no magical words that allow officials to evade transparency laws; the laws presume all government records are public, so agencies can’t withhold records simply because they don’t want them released.

Fortunately, at least one county employee thought that breaking the law must be a little more complicated than that, and this person went to Breaking Through News to blow the whistle.

Breaking Through News sent a FOIA request for those “NO FOIA” emails. The outlet received just 140 of the 1,212 emails that the county indicated were responsive, and those released records highlighted the county’s highly suspect approach to withholding public records. Among the released records were materials like the wages of Sheriff’s Office employees (clearly a public record), overtime rates (clearly a public record), and a letter from the sheriff deriding the competitive wages being offered at other county departments (embarrassing, but still clearly a public record).

Other clearly public records, according to a local court, included recordings of executive sessions that the commissioners had entered illegally, which Breaking Through News learned about through the released records. The outlet teamed up with the Augusta Free Press to sue for access to the recordings, a suit they won last month. They still haven’t received the awarded records, and it’s possible that Augusta County will appeal. Still, thanks to the efforts of local journalists, officials’ misguided attempt to conjure a culture of “No FOIA” in Augusta County ultimately brought the county more scrutiny and accountability.

The Poop and Pasta Award: Richlands, Va.

Government officials retaliated against a public records requester by filling her mailbox with noodles.

In 2020, Laura Mollo of Richlands, Va., discovered that the county 911 center could not dispatch Richlands residents’ emergency calls: While the center dispatched all other county 911 calls, calls from Richlands had to be transferred to the Richlands Police Department to be handled. After the Richlands Town Council dismissed Mollo’s concerns, she began requesting records under the Virginia Freedom of Information Act. The records showed that Richlands residents faced lengthy delays in connecting with local emergency services. On one call, a woman pleaded for help for her husband, only to be told that county dispatch couldn’t do anything—and her husband died during the delay. Other records Mollo obtained showed that Richlands appeared to be misusing its resources.

You would hope that public officials would be grateful that Mollo uncovered the town’s inadequate emergency response system and budget mismanagement. Well, not exactly: Mollo endured a campaign of intimidation and harassment for holding the government accountable. Mollo describes how her mailbox was stuffed with cow manure on one occasion, and spaghetti on another (which Mollo understood to be an insult to her husband’s Italian heritage). A town contractor harassed her at her home; police pulled her over; and Richlands officials even had a special prosecutor investigate her.

But this story has a happy ending: In November 2022, Mollo was elected to the Richlands Town Council. The records she uncovered led Richlands to change over to the county 911 center, which now dispatches Richlands residents’ calls. And in 2023, the Virginia Coalition for Open Government recognized Mollo by awarding her the Laurence E. Richardson Citizen Award for Open Government. Mollo’s recognition is well-deserved. Our communities are indebted to people like her who vindicate our right to public records, especially when they face such inexcusable harassment for their efforts.

The Error 404 Transparency Not Found Award: FOIAonline

In 2012, FOIAonline was launched with much fanfare as a way to bring federal transparency into the late 20th century. No longer would requesters have to mail or fax requests. Instead, FOIAonline was a consolidated starting point, managed by the Environmental Protection Agency (EPA), that let you file Freedom of Information Act requests with numerous federal entities from within a single digital interface.

Even better, the results of requests would be available online, meaning that if someone else asked for interesting information, it would be available to everyone, potentially reducing the number of duplicate requests. It was a good idea—but it was marred from the beginning by uneven uptake, agency infighting, and inscrutable design decisions that created endless headaches. In its latter years, FOIAonline would go down for days or weeks at a time without explanation. The portal saw agency after agency ditch the platform in favor of either homegrown solutions or third-party vendors.

Last year, the EPA announced that the grand experiment was being shuttered, leaving thousands of requesters uncertain about how and where to follow up on their open requests, and unceremoniously deleting millions of documents from public access without any indication of whether they would be made available again.

In a very on-brand twist of the knife, the decision to sunset FOIAonline was actually made two years prior, after an EPA office reported in a presentation that the service was likely to enter a “financial death spiral” of rising costs and reduced agency usage. Meanwhile, civil-society organizations such as MuckRock, the Project on Government Oversight, and the Internet Archive have worked to resuscitate and make available at least some of the documents the site used to host.

The Literary Judicial Thrashing of the Year Award: Pennridge, Penn., School District

Sometimes when you're caught breaking the law, the judge will throw the book at you. In the case of the Pennridge School District in Bucks County, Penn., Judge Jordan B. Yeager catapulted an entire shelf of banned books at administrators for violating the state's Right-to-Know Law.

The case begins with Darren Laustsen, a local parent who was alarmed by a new policy to restrict access to books that deal with “sexualized content,” seemingly in lockstep with book-censorship efforts happening around the country. Searching the school library's catalog, he came across a strange trend: Certain controversial books that appeared on other challenged-book lists had been checked out for a year or more. Since students are only allowed to check out books for a week, he (correctly) suspected that library staff were checking them out themselves to block access.

So he filed a public records request for all books checked out by non-students. Now, it's generally important for library patrons to have their privacy protected when it comes to the books they read—but it's a different story if public employees are checking out books as part of their official duties and effectively enabling censorship. The district withheld the records, provided incomplete information, and even went so far as to return books and re-check them out under a student's account in order to obscure the truth. And so Laustsen sued.

The judge issued a scathing and literarily robust ruling: “In short, the district altered the records that were the subject of the request, thwarted public access to public information, and effectuated a cover-up of faculty, administrators, and other non-students’ removal of books from Pennridge High School’s library shelves." The opinion was peppered with witty quotes from historically banned books, including Nineteen Eighty-Four, Alice in Wonderland, The Art of Racing in the Rain and To Kill a Mockingbird. After enumerating the district's claims that later proved to be inaccurate, he cited Kurt Vonnegut's infamous catchphrase from Slaughterhouse-Five: "So it goes."

The Photographic Recall Award: Los Angeles Police Department

Police agencies seem to love nothing more than trumpeting an arrest with an accompanying mugshot—but when the tables are turned, and it’s the cops’ headshots being disclosed, they seem to lose their minds and all sense of the First Amendment.

This unconstitutional escapade began (and is still going) after a reporter and police watchdog published headshots of Los Angeles Police Department officers, which they lawfully obtained via a public records lawsuit. LAPD cops and their union were furious. The city then sued the reporter, Ben Camacho, and the Stop LAPD Spying Coalition, demanding that they remove the headshots from the internet and return the records to LAPD.

You read that right: After a settlement in a public records lawsuit required the city to disclose the headshots, officials turned around and sued the requester for, uh, disclosing those same records, because the city claimed it accidentally released pictures of undercover cops.

But it gets worse: Last fall, a trial court denied a motion to throw out the city’s case seeking to claw back the images; Camacho and the coalition have appealed that decision and have not taken the images offline. And in February, the LAPD sought to hold Camacho and the coalition liable for damages it may face in a separate lawsuit brought against it by hundreds of police officers whose headshots were disclosed.

We’re short on space, but we’ll try to explain the myriad ways in which all of the above is flagrantly unconstitutional: The First Amendment protects Camacho and the coalition’s ability to publish public records they lawfully obtained, prohibits courts from entering prior restraints that stop protected speech, and limits the LAPD’s ability to make them pay for any mistakes the city made in disclosing the headshots. Los Angeles officials should be ashamed of themselves—but their conduct shows that they apparently have no shame.

The Cops Anonymous Award: Chesterfield County Police Department, Va.

The Chesterfield County Police Department in Virginia refused to disclose the names of hundreds of police officers to a public records requester on this theory: Because the cops might at some point go undercover, the public could never learn their identities. It’s not at all dystopian to claim that a public law enforcement agency needs to have secret police!

Other police agencies throughout the state seem to deploy similar secrecy tactics, too.

The Keep Your Opinions to Yourself Award: Indiana Attorney General Todd Rokita

In March 2023, Indiana Attorney General Todd Rokita sent a letter to medical providers across the state demanding information about the types of gender-affirming care they may provide to young Hoosiers. But this was no unbiased probe: Rokita made his position very clear when he publicly blasted these health services as “the sterilization of vulnerable children” that “could legitimately be considered child abuse.” He made claims to the media that the clinics’ main goals weren’t to support vulnerable youth, but to rake in cash.

Yet as loud as he was about his views in the press, Rokita was suddenly tight-lipped once the nonprofit organization American Oversight filed a public records request asking for all the research, analyses, and other documentation that he used to support his claims. Although his agency located 85 documents relevant to the request, Rokita refused to release a single page, citing a legal exception that allows him to withhold deliberative documents that are “expressions of opinion or are of a speculative nature.”

If Rokita’s opinions on gender-affirming care weren’t based on facts, perhaps he should have kept those opinions and speculations to himself in the first place.

The Failed Sunshine State Award: Florida Gov. Ron DeSantis

Florida’s Sunshine Law is known as one of the strongest in the nation, but Gov. Ron DeSantis spent much of 2023 working, pretty successfully, to undermine its superlative status with a slew of bills designed to weaken public transparency and journalism.

In March, DeSantis was happy to sign a bill to withhold all records related to travel by the governor and a whole cast of characters. The law went into effect just over a week before the governor announced his presidential bid. In addition, DeSantis has asserted his “executive privilege” to block the release of public records in a move that, according to experts like media law professor Catherine Cameron, is unprecedented in Florida’s history of transparency.

DeSantis suspended his presidential campaign in January. That may affect how many trips he takes out of state in the coming months, but it won’t undo the damage of his Sunshine-slashing policies.

Multiple active lawsuits are challenging DeSantis over his handling of Sunshine Law requests. In one, The Washington Post is challenging the constitutionality of withholding the governor’s travel records. In that case, a Florida Department of Law Enforcement official last month claimed the governor had delayed the release of his travel records. Nonprofit watchdog group American Oversight filed a lawsuit in February, challenging “the unjustified and unlawful delay” in responding to requests, citing a dozen records requests to the governor’s office that have been pending for one to three years.

“It’s stunning, the amount of material that has been taken off the table from a state that many have considered to be the most transparent,” Michael Barfield, director of public access for the Florida Center for Government Accountability (FCGA), told NBC News. The FCGA is now suing the governor’s office for records on flights of migrants to Massachusetts. “We’ve quickly become one of the least transparent in the space of four years.”

The Self-Serving Special Session Award: Arkansas Gov. Sarah Huckabee Sanders

By design, FOIA laws exist to help the people who pay taxes hold the people who spend those taxes accountable. In Arkansas, as in many states, taxpayer money funds most government functions: daily office operations, schools, travel, dinners, security, etc. As Arkansas’ governor, Sarah Huckabee Sanders has flown all over the country, accompanied by members of her family and the Arkansas State Police. For the ASP alone, the people of Arkansas paid $1.4 million in the last half of last year.

Last year, Sanders seemed to tire of the scrutiny being paid to her office and her spending. Citing her family’s safety, she tried to shut down any attempt to see her travel records, taking the unusual step of calling a special session of the state Legislature to protect herself from the menace of transparency.

Notably, the governor had also recently been implicated in an Arkansas Freedom of Information Act case for these kinds of records.

The attempt to gut the law included a laundry list of carve-outs unrelated to safety, such as walking back the ability of public-records plaintiffs to recover attorney's fees when they win their case. Other attempts to scale back Arkansas' FOIA earlier in the year had not passed, and the state attorney general’s office was already working to study what improvements could be made to the law.  

Fortunately, the people of Arkansas came out to support the principle of government transparency, even as their governor decided she shouldn’t need to deal with it anymore. Over a tense few days, dozens of Arkansans lined up to testify in defense of the state FOIA and the value of holding elected officials, like Sanders, accountable to the people.

By the time the session wound down, the bill had gone through multiple revisions. The sponsors walked back most of the extreme asks and added a requirement for the Arkansas State Police to provide quarterly reports on some of the governor’s travel costs. However, other details of that travel, like companions and the size of the security team, ultimately became exempt. Sanders managed to twist the whole fiasco into a win, though it would be a great surprise if the Legislature didn’t reconvene this year with some fresh attempts to take a bite out of FOIA.

While such a blatant attempt to bash public transparency is certainly a loser move, it clearly earns Sanders a win in the FOILIES—and the distinction of being one of the least transparent government officials this year.

The Doobie-ous Redaction Award: U.S. Department of Health and Human Services and Drug Enforcement Administration
Illustration: A cannabis leaf covered with black-bar redactions. The feds heavily redacted an email about reclassifying cannabis from a Schedule I to a Schedule III substance.

Bloomberg reporters got a major scoop when they wrote about a Health and Human Services memo detailing how health officials were considering major changes to the federal restrictions on marijuana, recommending reclassifying it from a Schedule I substance to Schedule III.

Currently, the Schedule I classification for marijuana puts it in the same league as heroin and LSD, while Schedule III classification would indicate lower potential for harm and addiction along with valid medical applications.

Since Bloomberg viewed but didn’t publish the memo itself, reporters from the Cannabis Business Times filed a FOIA request to get the document into the public record. Their request was met with limited success: HHS provided a copy of the letter, but redacted virtually the entire document besides the salutation and contact information. When pressed further by CBT reporters, the DEA and HHS would only confirm what the redacted documents had already revealed—virtually nothing.

HHS handed over the full, 250-page review several months later, after a lawsuit was filed by an attorney in Texas. The crucial information the agencies had fought so hard to protect: “Based on my review of the evidence and the FDA’s recommendation, it is my recommendation as the Assistant Secretary for Health that marijuana should be placed in Schedule III of the CSA.”

The “Clearly Releasable,” Clearly Nonsense Award: U.S. Air Force

Increasingly, federal and state government agencies require public records requesters to submit their requests through online portals. It’s not uncommon for these portals to be quite lacking. For example, some portals fail to provide space to include information crucial to requests.

But the Air Force deserves special recognition for the changes it made to its submission portal, which asked requesters if they would agree to limit their requests to information that the Air Force deemed “clearly releasable.” You might think, “Surely the Air Force defined this vague ‘clearly releasable’ information.” Alas, you’d be wrong: The form stated only that requesters would “agree to accept any information that will be withheld in compliance with the principles of FOIA exemptions as a full release.” In other words, the Air Force asked requesters to give up the fight over information before it even began, and to accept the Air Force’s redactions and rejections as non-negotiable.

Following criticism, the Air Force jettisoned the portal changes. Moving forward, it’s “clear” that it should aim higher when it comes to transparency.

The Scrubbed Scrubs Award: Ontario Ministry of Health, Canada

Upon taking office in 2018, Ontario Premier Doug Ford was determined to shake up the Canadian province’s healthcare system. His administration has been a bit more tight-lipped, however, about the results of that invasive procedure. Under Ford, Ontario’s Ministry of Health is fighting the release of information on how understaffed the province’s medical system is, citing “economic and other interests.” The government’s own report, partially released to Global News, details high attrition as well as “chronic shortages” of nurses.

The reporters’ attempts to find out exactly how understaffed the system is, however, were met with black-bar redactions. The government claims that releasing the information would negatively impact “negotiating contracts with health-care workers.” However, the refusal to release the information hasn’t helped solve the problem; instead, it’s left the public in the dark about the extent of the issue and what it would actually cost to address it.

Global News has appealed the withholdings. That process has dragged on for over a year, but a decision is expected soon.

The Judicial Blindfold Award: Mississippi Justice Courts

Courts are usually transparent by default. People can walk in to watch hearings and trials, and can get access to court records online or at the court clerk’s office. And there are often court rules or state laws that ensure courts are public.

Apparently, the majority of Mississippi Justice Courts don’t feel like following those rules. An investigation by ProPublica and the Northeast Mississippi Daily Journal found that nearly two-thirds of these county-level courts obstructed public access to basic information about law enforcement’s execution of search warrants. This blockade not only appeared to violate state rules on court access; it frustrated the public’s ability to scrutinize when police officers raid someone’s home without knocking and announcing themselves.

The good news is that the Daily Journal is pushing back. It filed suit in the justice court in Union County, Miss., and asked for an end to the practice of never making search-warrant materials public.

Mississippi courts are unfortunately not alone in their efforts to keep search warrant records secret. The San Bernardino Superior Court of California sought to keep secret the search warrants used to engage in invasive digital surveillance, only disclosing most of them after EFF sued.

It’s My Party and I Can Hide Records If I Want to Award: Wyoming Department of Education

Does the public really have a right to know if their tax dollars pay for a private political event?

Former Superintendent of Public Instruction Brian Schroeder and Chief Communications Officer Linda Finnerty in the Wyoming Department of Education didn’t seem to think so, according to Laramie County Judge Steven Sharpe.

Sharpe, in his order requiring disclosure of the records, wrote that the two were more concerned with “covering the agency’s tracks” and acted in “bad faith” in complying with Wyoming’s state open records law.

The lawsuit proved that Schroeder originally used public money for a "Stop the Sexualization of Our Children" event and provided misleading statements to the plaintiffs about the source of funding for the private, pro-book-banning event.

The former superintendent had also failed to provide texts and emails sent via personal devices that were related to the planning of the event, ignoring the advice of the state’s attorneys. Instead, Schroeder decided to “shop around” for legal advice and listen to a friend, private attorney Drake Hill, who told him to not provide his cell phone for inspection.

Meanwhile, Finnerty and the Wyoming Department of Education “did not attempt to locate financial documents responsive to plaintiffs’ request, even though Finnerty knew or certainly should have known such records existed.”

Transparency won this round with the disclosure of more than 1,500 text messages and emails—and according to Sharpe, the incident established a legal precedent on Wyoming public records access.

The Fee-l the Burn Award: Baltimore Police Department

In 2020, Open Justice Baltimore sued the Baltimore Police Department over the agency's demand that the nonprofit watchdog group pay more than $1 million to obtain copies of use-of-force investigation files. 

The police department had decreased its assessment to $245,000 by the time of the lawsuit, but it rejected the nonprofit’s fee waiver, questioning the public interest in the records and whether they would change the public’s understanding of the issue. The agency also claimed that fulfilling the request would be costly and burdensome for the short-staffed department.

In 2023, Maryland’s Supreme Court issued a sizzling decision criticizing the BPD’s $245,000 fee assessment and its refusal to waive that fee in the name of public interest. The Supreme Court found that the public interest in how the department polices itself was clear and that the department should have considered how a denial of the fee waiver would “exacerbate the public controversy” and further “the perception that BPD has something to hide.”

The Supreme Court called BPD’s fee assessment “arbitrary and capricious” and remanded the case to the police department, which must now reconsider the fee waiver. The unanimous decision from the state’s highest court did not mince words on the cost of public records, either: “While an official custodian’s discretion in these matters is broad,” the opinion reads, “it is not boundless.”

The Continuing Failure Award: United States Citizenship and Immigration Services

Alien registration files, also commonly known as “A-Files,” contain crucial information about a non-citizen’s interaction with immigration agencies, and are central to determining eligibility for immigration benefits.

However, U.S. immigration agencies have routinely failed to release alien files within the statutory time limit for responding, according to Nightingale et al v. U.S. Citizenship and Immigration Services et al, a class-action lawsuit by a group of immigration attorneys and individual requesters.

The attorneys filed suit in 2019 against the U.S. Citizenship and Immigration Services, the Department of Homeland Security and U.S. Immigration and Customs Enforcement. In 2020, Judge William H. Orrick ruled that the agencies must respond to FOIA requests within 20 business days, and provide the court and class counsel with quarterly compliance reports. The case remains open.

With U.S. immigration courts facing a backlog of more than 2 million cases as of October of last year, according to the U.S. Government Accountability Office, the path to citizenship is bogged down for many applicants. The failure of immigration agencies to comply with statutory deadlines for requests only makes navigating the immigration system even more challenging. There is reason for hope for applicants, however. In 2022, Attorney General Merrick Garland made it federal policy not to require FOIA requests for copies of immigration proceedings, instead encouraging agencies to make records more readily accessible through other means.

Even the A-File backlog itself is improving. In the most recent status report, filed by the Department of Justice, the government reported that “of the approximately 119,140 new A-File requests received in the current reporting period, approximately 82,582 were completed, and approximately 81,980 were timely completed.”

The Creative Invoicing Award: Richmond, Va., Police Department
Illustration: A redacted document with an expensive price tag attached. Some agencies claim outrageous fees for redacting documents to deter public access.

OpenOversightVA requested copies of general procedures—the basic outline of how police departments run—from localities across Virginia. While many departments either publicly posted them or provided them at no charge, Richmond Police responded with a $7,873.14 invoice. That’s $52.14 an hour to spend one hour on “review, and, if necessary, redaction” on each of the department’s 151 procedures.

This Foilies “winner” was chosen because of the wide gap between how available the information should be, and the staggering cost to bring it out of the file cabinet.

As MuckRock’s agency tracking shows, this is hardly an aberration for the agency. But this estimated invoice came not long after the department’s tear-gassing of protesters in 2020 cost the city almost $700,000. At a time when other departments are opening their most basic rulebooks (in California, for example, every law enforcement agency is required to post these policy manuals online), Richmond has been caught attempting to use a simple FOIA request as a cash cow.

The Foilies (Creative Commons Attribution License) were compiled by the Electronic Frontier Foundation (Director of Investigations Dave Maass, Senior Staff Attorney Aaron Mackey, Legal Fellow Brendan Gilligan, Investigative Researcher Beryl Lipton) and MuckRock (Co-Founder Michael Morisy, Data Reporter Dillon Bergin, Engagement Journalist Kelly Kauffman, and Contributor Tom Nash), with further review and editing by Shawn Musgrave. Illustrations are by EFF Designer Hannah Diaz. The Foilies are published in partnership with the Association of Alternative Newsmedia. 

Don’t Fall for the Latest Changes to the Dangerous Kids Online Safety Act 

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version this week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction, whether their own or that of their friends, families, and neighbors, will not have the resources they need to do so.  
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.   

KOSA is still a censorship bill and it will still harm a large number of minors

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their product. These include self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before.  

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform. 

Design features include but are not limited to 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision. KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like. But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.   

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. The concern is that these lawmakers do not like some lawful content that is accessible via an online service’s features. 

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities.

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors, then under this language it could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to bar an online newspaper from using an infinite scroll feature or auto-playing videos, that law would be struck down. KOSA’s latest variant is no different.   

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state from targeting an online service based on claims that, by hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service. 

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission whose members are nominated by the president. This still allows a small group of federal officials appointed by the president to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.  

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers’ Privacy and Free Expression

By: Aaron Mackey
January 4, 2024 at 13:41

As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone’s viewing history, EFF argued in court last month.

For decades, the Video Privacy Protection Act (VPPA) has safeguarded people’s viewing habits by generally requiring services that offer videos to the public to get their customers’ written consent before disclosing that information to the government or a private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users’ viewing habits, too.

The VPPA, however, is under attack by Patreon. That service for content creators and viewers is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook.

Patreon argues that even if it did violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law’s applications violate the First Amendment, judged in relation to the law’s plainly legitimate sweep.  Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of the Communications Decency Act’s (CDA) online speech restrictions because it placed an “unacceptably heavy burden on protected speech.”

EFF is second to none in fighting for everyone’s First Amendment rights in court, including internet users (in Reno mentioned above) and the companies that host our speech online. But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who benefit from the VPPA’s protections.

As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon’s argument is wrong because the VPPA directly advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance.

“The VPPA provides Americans with critical, private space to view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion,” we wrote. “That breathing room is often a catalyst for people’s free expression.”

As the brief recounts, courts have protected against government efforts to learn people’s book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people’s ability to privately consume media advances the First Amendment’s purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people’s video viewing habits are intensely private, because the data can reveal intimate details about our personalities, politics, religious beliefs, and values.

Patreon’s First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, “[t]he VPPA’s purpose, application, and enforcement is overwhelmingly focused on regulating the disclosure of a person’s video viewing history in the course of a commercial transaction between the provider and user.” In other words, the legitimate sweep of the VPPA does not violate the First Amendment because generally there is no public interest in disclosing any one person’s video viewing habits that a company learns purely because it is in the business of selling video access to the public.

There is a better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF’s brief explains, should a video provider face liability under the VPPA for disclosing a customer’s video viewing history, it can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern.

Indeed, courts have recognized that certain applications of privacy laws, such as the Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But courts generally address those concerns by invalidating the case-specific application of those laws, rather than invalidating them entirely.

“In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course,” EFF wrote. “This approach accommodates the broad and legitimate sweep of those privacy protections while vindicating speakers’ First Amendment rights.”

Patreon's argument would see the VPPA gutted—an enormous loss for privacy and free expression for the public. The court should protect against the disclosure of everyone’s viewing history and protect the VPPA.

You can read our brief here.

Victory! Police Drone Footage is Not Categorically Exempt From California’s Public Records Law

By: Aaron Mackey
January 3, 2024 at 13:20

Video footage captured by police drones sent in response to 911 calls cannot be kept entirely secret from the public, a California appellate court ruled last week.

The decision by the California Court of Appeal for the Fourth District came after a journalist sought access to videos created by Chula Vista Police Department’s “Drones as First Responders” (DFR) program. The police department is the first law enforcement agency in the country to use drones to respond to emergency calls, and several other agencies across the U.S. have since adopted similar models.

After the journalist, Arturo Castañares of La Prensa, sued, the trial court ruled that Chula Vista police could withhold all footage because the videos were exempt from disclosure as law enforcement investigatory records under the California Public Records Act. Castañares appealed.

EFF, along with the First Amendment Coalition and the Reporters Committee for Freedom of the Press, filed a friend-of-the-court brief in support of Castañares, arguing that categorically excluding all drone footage from public disclosure could have troubling consequences on the public’s ability to understand and oversee the police drone program.

Drones, also called unmanned aerial vehicles (UAVs) or unmanned aerial systems (UAS), are relatively inexpensive devices that police use to remotely surveil areas. Historically, law enforcement agencies have used small systems, such as quadrotors, for situational awareness during emergency situations, for capturing crime scene footage, or for monitoring public gatherings, such as parades and protests. DFR programs represent a fundamental change in strategy, with police responding to a much, much larger number of situations with drones, resulting in pervasive, if not persistent, surveillance of communities.

Because drones raise distinct privacy and free expression concerns, foreclosing public access to their footage would make it difficult to assess whether police are following their own rules about when and whether they record sensitive places, such as people’s homes or public protests.

The appellate court agreed that drone footage is not categorically exempt from public disclosure. In reversing the trial court’s decision, the California Court of Appeal ruled that although some 911 calls are likely part of law enforcement investigation or at least are used to determine whether a crime occurred, not all 911 calls involve crimes.

“For example, a 911 call about a mountain lion roaming a neighborhood, a water leak, or a stranded motorist on the freeway could warrant the use of a drone but do not suggest a crime might have been committed or is in the process of being committed,” the court wrote.

Because it’s possible that some of Chula Vista’s drone footage involves scenarios in which no crime is committed or suspected, the police department cannot categorically withhold every moment of video footage from the public.

The appellate court sent the case back to the trial court and ordered it and the police department to take a more nuanced approach to determine whether the underlying call for service involved a crime or was an initial investigation into a potential crime.

“The drone video footage should not be treated as a monolith, but rather, it can be divided into separate parts corresponding to each specific call,” the court wrote. “Then each distinct video can be evaluated under the CPRA in relation to the call triggering the drone dispatch.”

This victory sends a message to other agencies in California adopting copycat programs, such as the Beverly Hills Police Department, Irvine Police Department, and Fremont Police Department, that they can’t abuse public records laws to shield every second of drone footage from public scrutiny.

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

Fourteen months after California passed the AADC, it feels like a dam has broken.

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age, and ultimately their identity, which will lead to fewer people expressing themselves or seeking information online. 

Federal courts have blocked the laws in Arkansas and California.

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

Arkansas passed a similar law to Utah's S.B. 152 in April, which requires users of social media to prove their age or obtain parental permission to create social media accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempt to regulate social media for young people. Our Year in Review post on similar federal legislation that was introduced this year covers that fight, which was successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s access to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

The U.S. Supreme Court’s Busy Year of Free Speech and Tech Cases: 2023 Year in Review

The U.S. Supreme Court has taken an unusually active interest in internet free speech issues. EFF participated as amicus in a whopping nine cases before the court this year. The court decided four of those cases, and decisions in the remaining five cases will be published in 2024.   

Of the four cases decided this year, the results are a mixed bag. The court showed restraint and respect for free speech rights when considering whether social media platforms should be liable for ISIS content, while also avoiding gutting one of the key laws supporting free speech online. The court also heightened protections for speech that may rise to the level of criminal “true threats.” But the court declined to overturn an overbroad law that relates to speech about immigration.  

Next year, we’re hopeful that the court will uphold the right of individuals to comment on government officials’ social media pages, when those pages are largely used for governmental purposes and even when the officials don’t like what those comments say; and that the court will strike down government overreach in mandating what content must stay up or come down online, or otherwise distorting social media editorial decisions. 

Platform Liability for Violent Extremist Content 

Cases: Gonzalez v. Google and Twitter v. Taamneh – DECIDED 

The court, in two similar cases, declined to hold social media companies—YouTube and Twitter—responsible for aiding and abetting terrorist violence allegedly caused by user-generated content posted to the platforms. The case against YouTube (Google) was particularly concerning because the plaintiffs had asked the court to narrow the scope of Section 230 when internet intermediaries recommend third-party content. As we’ve said for decades, Section 230 is one of the most important laws for protecting internet users’ speech. We argued in our brief that narrowing Section 230, the law that generally protects users and online services from lawsuits based on content created by others, in any way would lead to increased censorship and a degraded online experience for users; as would holding platforms responsible for aiding and abetting acts of terrorism. Thankfully, the court declined to address the scope of Section 230 and held that the online platforms may not generally be held liable under the Anti-Terrorism Act. 

True Threats Online 

Case: Counterman v. Colorado – DECIDED 

The court considered what state of mind a speaker must have to lose First Amendment protection and be liable for uttering “true threats,” in a case involving Facebook messages that led to the defendant’s conviction. The issue before the court was whether any time the government seeks to prosecute someone for threatening violence against another person, it must prove that the speaker had some subjective intent to threaten the victim, or whether the government need only prove, objectively, that a reasonable person would have known that their speech would be perceived as a threat. We urged the court to require some level of subjective intent to threaten before an individual’s speech can be considered a "true threat" not protected by the First Amendment. In our highly digitized society, online speech like posts, messages, and emails, can be taken out of context, repackaged in ways that distort or completely lose their meaning, and spread far beyond the intended recipients. This higher standard is thus needed to protect speech such as humor, art, misunderstandings, satire, and misrepresentations. The court largely agreed and held that subjective understanding by the defendant is required: that, at minimum, the speaker was in fact subjectively aware of the serious risk that the recipient of the statements would regard their speech as a threat, but recklessly made them anyway.  

Encouraging Illegal Immigration  

Case: U.S. v. Hansen - DECIDED  

The court upheld the Encouragement Provision that makes it a federal crime to “encourage or induce” an undocumented immigrant to “reside” in the United States, if one knows that such “coming to, entry, or residence” in the U.S. will be in violation of the law. We urged the court to uphold the Ninth Circuit’s ruling, which found that the language is unconstitutionally overbroad under the First Amendment because it threatens an enormous amount of protected online speech. This includes prohibiting, for example, encouraging an undocumented immigrant to take shelter during a natural disaster, advising an undocumented immigrant about available social services, or even providing noncitizens with Know Your Rights resources or certain other forms of legal advice. Although the court declined to hold the law unconstitutional, it sharply narrowed the law’s impact on free speech, ruling that the Encouragement Provision applies only to the intentional solicitation or facilitation of immigration law violations. 

Public Officials Censoring Social Media Comments 

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – PENDING 

The court is considering a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The First Amendment generally prohibits viewpoint-based discrimination in government forums open to speech by members of the public. The threshold question in these cases is what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions. We argued that the court should establish a functional test that looks at how an account is actually used. It is important that the court make clear once and for all that public officials using social media in furtherance of their official duties can’t sidestep their First Amendment obligations because they’re using nominally “personal” or preexisting campaign accounts. 

Government Mandates for Platforms to Carry Certain Online Speech 

Cases: NetChoice v. Paxton and Moody v. NetChoice - PENDING 

The court will hear arguments this spring about whether laws in Florida and Texas violate the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. Although the state laws differ in how they operate and the type of mandates they impose, each law represents a profound intrusion into social media sites’ ability to decide for themselves what speech they will publish and how they will present it to users. As we argued in urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs. To be sure, internet users are rightly frustrated with social media services’ content moderation practices, which are often perplexing and mistaken. But permitting Florida and Texas to deploy their coercive power in retaliation for those frustrations raises significant First Amendment and human rights concerns. 

Government Coercion in Content Moderation 

Case: Murthy v. Missouri – PENDING 

Last, but certainly not least, the court is considering the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech. But the court has not previously applied this principle to government communications with social media sites about user posts. We urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible. We also urged the court to make clear that courts reviewing claims of impermissible government involvement in content moderation are obligated to conduct fact- and context-specific inquiries. And we argued that close cases should go against the government, as it is best positioned to ensure that its involvement in platforms’ policy enforcement decisions remains permissible. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Victory! Montana’s Unprecedented TikTok Ban is Unconstitutional

By: Aaron Mackey
December 1, 2023 at 17:33

A federal court on Thursday blocked Montana’s effort to ban TikTok from the state, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. 

Montana passed a law in May that prohibited TikTok from operating anywhere within the state and imposed $10,000 penalties on TikTok or any mobile application store that allowed users to access TikTok. The law was scheduled to take effect in January. EFF opposed enactment of this law, along with ACLU, CDT, and others. 

In issuing a preliminary injunction, the district court rejected the state’s claim that it had a legitimate interest in banning the popular video sharing application because TikTok is owned by a Chinese company. And although Montana has an interest in protecting minors from harmful content and protecting consumers’ privacy, the law’s total ban was not narrowly tailored to address the state’s concerns.

“SB 419 bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” the court wrote. 

EFF and the ACLU filed a friend-of-the-court brief in support of the challenge, brought by TikTok and a group of the app’s users who live in Montana. The brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court blocked the law from going into effect. 

The district court agreed that Montana’s statute violated the First Amendment. Although the court declined to decide whether the law was subject to heightened review under the Constitution (known as strict scrutiny), it ruled that Montana’s banning of TikTok failed to satisfy even less-searching review known as intermediate scrutiny.

“Ultimately, if Montana’s interest in consumer protection and protecting minors is to be carried out through legislation, the method sought to achieve those ends here was not narrowly tailored,” the court wrote.

The court’s decision this week joins a growing list of cases in which judges have halted state laws that unconstitutionally burden internet users’ First Amendment rights in the name of consumer privacy or child protection.

As EFF has said repeatedly, state lawmakers are right to be concerned about online services collecting massive volumes of their residents’ private data. But lawmakers should address those concerns directly by enacting comprehensive consumer data privacy laws, rather than seeking to ban those services entirely or prevent children from accessing them. Consumer data privacy laws both directly address lawmakers’ concerns and do not raise the First Amendment issues that lead to courts invalidating laws like Montana’s.

Protecting Kids on Social Media Act: Amended and Still Problematic

Senators who believe that children and teens must be shielded from social media have updated the problematic Protecting Kids on Social Media Act, though it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition.  

As we wrote in August, the original bill (S. 1291) contained a host of problems. A recent draft of the amended bill gets rid of some of the most flagrantly unconstitutional provisions: It no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media. 

However, the amended bill is still rife with issues.   

The biggest is that it prohibits children under 13 from using any ad-based social media. Though many social media platforms do require users to be over 13 to join (primarily to avoid liability under COPPA), some platforms designed for young people do not. Most platforms designed for young people are not ad-based, but there is no reason young people should be barred entirely from a thoughtful, cautious platform that is designed for children but also relies on contextual ads. Were this bill made law, ad-based platforms might switch to a fee-based model, limiting access to only those young people who can afford the fee. Banning children under 13 from having social media accounts is a massive overreach that takes authority away from parents and infringes on the First Amendment rights of minors.  

The vast majority of content on social media is lawful speech fully protected by the First Amendment. Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media. At the same time, parents have a right to oversee their children’s online activities. But the First Amendment forbids Congress from making a freewheeling determination that children can be blocked from accessing lawful speech. The Supreme Court has ruled that there is no children’s exception to the First Amendment.   


Perhaps recognizing this, the amended bill includes a caveat that children may still view publicly available social media content that is not behind a login, or through someone else’s account (for example, a parent’s account). But this does not help the bill. Because the caveat is essentially a giant loophole that will allow children to evade the bill’s prohibition, it raises legitimate questions about whether the sponsors are serious about trying to address the purported harms they believe exist anytime minors access social media. As the Supreme Court wrote in striking down a California law aimed at restricting minors’ access to violent video games, a law that is so “wildly underinclusive … raises serious doubts about whether the government is in fact pursuing the interest it invokes….” If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

Another problem: The amended bill employs a new standard for determining whether platforms know the age of users: “[a] social media platform shall not permit an individual to create or maintain an account if it has actual knowledge or knowledge fairly implied on the basis of objective circumstances that the individual is a child [under 13].” As explained below, this may still force online platforms to engage in some form of age verification for all their users. 

While this standard comes from FTC regulatory authority, the amended bill attempts to define it for the social media context. The amended bill directs courts, when determining whether a social media company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor, to consider “competent and reliable empirical evidence, taking into account the totality of the circumstances, including whether the operator, using available technology, exercised reasonable care.” But, according to the amended bill, “reasonable care” is not meant to mandate “age gating or age verification,” the collection of “any personal data with respect to the age of users that the operator is not already collecting in the normal course of business,” the viewing of “users’ private messages” or the breaking of encryption. 

While these exclusions provide superficial comfort, the reality is that companies will take the path of least resistance and will be incentivized to implement age gating and/or age verification, which we’ve raised concerns about many times over. This bait-and-switch tactic is not new in bills that aim to protect young people online. Legislators, aware that age verification requirements will likely be struck down, are explicit that the bills do not require age verification. Then, they write a requirement that would lead most companies to implement age verification or else face liability.  


In practice, it’s not clear how a court is expected to determine whether a company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor in the event of an enforcement action. While the absence of age gating or age verification mechanisms may not itself prove that a company failed to exercise reasonable care in letting a child under 13 use the site, the use of age gating or age verification tools to deny children under 13 the ability to use a social media site will surely be an acceptable way to avoid liability. Moreover, without more guidance, this standard of “reasonable care” is quite vague, which poses additional First Amendment and due process problems. 

Finally, although the bill no longer creates a digital ID pilot program for age verification, it still tries to push the issue forward. The amended bill orders a study and report looking at “current available technology and technologically feasible methods and options for developing and deploying systems to provide secure digital identification credentials; and systems to verify age at the device and operating system level.” But any consideration of digital identification for age verification is dangerous, given the risk of sliding down the slippery slope toward a national ID that is used for many more things than age verification and that threatens individual privacy and civil liberties. 
