
X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online

Late last week, the Senate released yet another version of the Kids Online Safety Act, reportedly written with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.

TELL CONGRESS: VOTE NO ON KOSA

No KOSA in last-minute funding bills

Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates designs of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

The authors have failed to grasp the difference between immunizing individual expression and protecting a platform from the liability that KOSA would place on it.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022. 

This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms, not users, must mitigate the harms listed in the bill, and the platform’s ability to share users’ views is what’s at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let’s say, for example, that a covered platform like Reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold Reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or of a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply respond that it isn’t enforcing the law based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. The new language is a superfluous carveout for user speech and expression that KOSA never penalized in the first place, while platforms can still be penalized for distributing that same expression.

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression and protecting their own platform from the liability that KOSA would place on it.

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s issues has been its vague list of harms, which remains broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill.

The bill doesn’t even require that the impact be a negative one.

It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, this update cobbles together a definition that sounds just medical, or just legal, enough to appear legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot.

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing minors content that it believes results in depression or anxiety, so long as it can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football.

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as the incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it’s never enforced; the FTC could simply identify the types of content it believes harm children, and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be in a last-minute funding bill.

TELL CONGRESS: VOTE NO ON KOSA

No KOSA in last-minute funding bills

EFF to Fifth Circuit: Age Verification Laws Will Hurt More Than They Help

EFF, along with the ACLU and the ACLU of Mississippi, filed an amicus brief on Thursday asking a federal appellate court to continue to block Mississippi’s HB 1126—a bill that imposes age verification mandates on social media services across the internet.

Our friend-of-the-court brief, filed in the U.S. Court of Appeals for the Fifth Circuit, argues that HB 1126 is “an extraordinary censorship law that violates all internet users’ First Amendment rights to speak and to access protected speech” online.

HB 1126 forces social media sites to verify the age of every user and requires minors to get explicit parental consent before accessing online spaces. It also pressures them to monitor and censor content on broad, vaguely defined topics—many of which involve constitutionally protected speech. These sweeping provisions create significant barriers to the free and open internet and “force adults and minors alike to sacrifice anonymity, privacy, and security to engage in protected online expression.” A federal district court already prevented HB 1126 from going into effect, ruling that it likely violated the First Amendment.

Blocking Minors from Vital Online Spaces

At the heart of our opposition to HB 1126 is its dangerous impact on young people’s free expression. Minors enjoy the same First Amendment right as adults to access and engage in protected speech online.

“No legal authority permits lawmakers to burden adults’ access to political, religious, educational, and artistic speech with restrictive age-verification regimes out of a concern for what minors might see. Nor is there any legal authority that permits lawmakers to block minors categorically from engaging in protected expression on general purpose internet sites like those regulated by HB 1126.”

Social media sites are not just entertainment hubs; they are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics. As our brief explains, minors’ access to these online spaces “is essential to their growth into productive members of adult society because it helps them develop their own ideas, learn to express themselves, and engage productively with others in our democratic public sphere.” 

Social media also “enables individuals whose voices would otherwise not be heard to make vital and even lifesaving connections with one another, and to share their unique perspectives more widely.” LGBTQ+ youth, for example, turn to social media for community, exploration, and support, while others find help in forums that discuss mental health, disability, eating disorders, or domestic violence.

HB 1126’s age-verification regime places unnecessary barriers between young people and these crucial resources. The law compels platforms to broadly restrict minors’ access to a vague list of topics—the majority of which concern constitutionally protected speech—that Mississippi deems “harmful” for minors.

First Amendment Rights: Protection for All

The impact of HB 1126 is not limited to minors—it also places unnecessary and unconstitutional restrictions on adults’ speech. The law requires all users to verify their age before accessing social media, which could entirely block access for the millions of U.S. adults who lack government-issued ID. Should a person who takes public transit every day need to get a driver’s license just to get online? Would you want everything you do online to be linked to your government-issued ID?

HB 1126 also strips away users’ protected right to online anonymity, leaving them vulnerable to exposure and harassment and chilling them from speaking freely on social media. As our brief recounts, the vast majority of internet users have taken steps to minimize their digital footprints and even to “avoid observation by specific people, organizations, or the government.”

“By forcibly tying internet users’ online interactions to their real-world identities, HB 1126 will chill their ability to engage in dissent, discuss sensitive, personal, controversial, or stigmatized content, or seek help from online communities.”

Online Age Verification: A Privacy Nightmare

Finally, HB 1126 forces social media sites to collect users’ most sensitive and immutable data, turning those sites into prime targets for hackers. In an era where data breaches and identity theft are alarmingly common, HB 1126 puts every user’s personal data at risk. Furthermore, the process of age verification often involves third-party services that profit from collecting and selling user data. This means that the sensitive personal information on your ID—such as your name, home address, and date of birth—could be shared with a web of data brokers, advertisers, and other intermediary entities.

“Under the plain language of HB 1126, those intermediaries are not required to delete users’ identifying data and, unlike the online service providers themselves, they are also not restricted from sharing, disclosing, or selling that sensitive data. Indeed, the incentives are the opposite: to share the data widely.”

No one—neither minors nor adults—should have to sacrifice their privacy or anonymity in order to exercise their free speech rights online.

Courts Continue To Block Laws Like Mississippi’s

Online age verification laws like HB 1126 are not new, and courts across the country have consistently ruled them unconstitutional. In cases from Arkansas to Ohio to Utah, courts have struck down similar online age-verification mandates because they burden users’ access to, and ability to engage with, protected speech.

While Mississippi may have a legitimate interest in protecting children from harm, as the Supreme Court has held, “that does not include a free-floating power to restrict the ideas to which children may be exposed.” By imposing age verification requirements on all users, laws like HB 1126 undermine the First Amendment rights of both minors and adults, pose serious privacy and security risks, and chill users from accessing one of the most powerful expressive mediums of our time.

For these reasons, we urge the Fifth Circuit to follow suit and continue to block Mississippi HB 1126.

The U.S. House Version of KOSA: Still a Censorship Bill

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S.; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the “duty of care” requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or “high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user is under 17 years old.

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws. It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources.

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last “must-pass” legislation until the fall. This would effectively bypass public discussion of the House version. Just last month, Congress attached another contentious, potentially unconstitutional bill to unrelated legislation by including a TikTok ban in a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits.

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead seek alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

EFF Urges Second Circuit to Affirm Injunction of New York’s Dangerous Online “Hateful Conduct” Law

EFF, along with the ACLU, urged the U.S. Court of Appeals for the Second Circuit to find unconstitutional a New York statute that compels platforms to moderate online speech falling within the state’s particular definition of “hateful conduct.”

The statute itself requires covered social media platforms to develop a mechanism that allows users to report incidents of “hateful conduct” (as defined by the state), and to publish a policy detailing how the platform will address such incidents in direct responses provided to each individual complainant. Noncompliance with the statute is enforceable through Attorney General investigations, subpoenas, and daily fines of $1,000 per violation. The statute is part of a broader scheme by New York officials, including the Governor and the Attorney General, to unlawfully coerce online platforms into censoring speech that the state deems “hateful.”

The bill was rushed through the New York legislature in the aftermath of last year’s tragic mass shooting at a Buffalo, NY supermarket. At the same time, the state launched an investigation into social media platforms’ “civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.” In the months that followed, state officials alleged that it was their perceived “lack of oversight, transparency, and accountability” over social media platforms’ content moderation policies that had caused such “dangerous and corrosive ideas to spread,” and held up this “hateful conduct” law as the regulatory solution to online hate speech. And, when the investigation into such platform liability concluded, Attorney General Letitia James called for platforms to be held accountable and threatened to push for measures that would ensure they take “reasonable steps to prevent unlawful violent criminal content from appearing on their platforms.”

EFF and ACLU filed a friend-of-the-court brief in support of the plaintiffs: Eugene Volokh, a First Amendment scholar who runs the legal blog Volokh Conspiracy; the video sharing site Rumble; and the social media site Locals. In the brief, we urged the court to affirm the trial court’s preliminary injunction of the law. As we have explained many times before, any government involvement in online intermediaries’ content moderation processes—regardless of the form or degree—raises serious First Amendment and broader human rights concerns.

Despite New York officials’ seemingly good intentions here, there are several problems with this law.

First, the law broadly defines “hateful conduct” as the “use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons,” a definition that could encompass a broad range of speech not typically considered “hate speech.” 

Next, the bill unconstitutionally compels platforms’ speech by forcing them to replace their own editorial policies with the state’s. Social media platforms and other online intermediaries subject to this bill have a long-protected First Amendment right to curate the speech that others publish on their sites—regardless of whether they curate a lot or a little, and regardless of whether their editorial philosophy is readily discernible or consistently applied. Here, by requiring publishers to develop, publish, and enforce an editorial standard at all—much less one that must adopt the state’s view of “hateful conduct”—this statute unlawfully compels speech and chills platforms’ First Amendment-protected exercise of editorial freedom.

Finally, the thinly veiled threats from officials designed to coerce websites into adopting the state’s editorial position constitute unconstitutional coercion.

We agree that many internet users want the online platforms they use to moderate certain hateful speech, but those decisions must be made by the platforms themselves, not the government. Platforms’ editorial freedom is staunchly protected by the First Amendment; to allow the government to manipulate social media curation for its own purposes threatens fundamental freedoms. Therefore, to protect our online spaces, we must strictly scrutinize all government attempts to co-opt platforms’ content moderation policies—whether by preventing moderation, as in Texas and Florida, or by compelling moderation, as New York has done here.
