
Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

By David Greene
July 22, 2024 at 15:34

We don’t know much more about when government jawboning of social media companies—that is, attempting to pressure them to censor users’ speech—violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it: how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that the government statements they complained of were likely the cause of any specific action taken against them by the social media platforms, or that such actions were likely to recur.

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 


So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. Netchoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that it was because of the government’s dictate, not the platform’s own decision. 

Plaintiffs Lack Standing to Bring Jawboning Claims

Article III of the U.S. Constitution limits federal courts to considering only “cases and controversies.” This limitation requires that a plaintiff have suffered an injury that is traceable to the defendants and that the court has the power to redress. The standing doctrine can be a significant barrier for litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation, before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue.


The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

The Court Narrows the Right to Listen

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States who had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward EFF will advocate for a narrow reading of this holding. 

 As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo.


NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss, before any discovery had been conducted and at a stage when courts are required to accept all of the plaintiffs’ factual allegations as true.

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone, the more likely they are to perceive the official’s speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce is also highly relevant. The Supreme Court did not directly say so, unfortunately. But it did refer several times to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

Victory! Supreme Court Rules Platforms Have First Amendment Right to Decide What Speech to Carry, Free of State Mandates

By David Greene
July 1, 2024 at 11:31

The Supreme Court today correctly found that social media platforms, like newspapers, bookstores, and art galleries before them, have First Amendment rights to curate and edit the speech of others they deliver to their users, and the government has a very limited role in dictating what social media platforms must and must not publish. Although users remain understandably frustrated with how the large platforms moderate user speech, the best deal for users is when platforms make these decisions instead of the government.  

As we explained in our amicus brief, users are far better off when publishers make editorial decisions free from government mandates. Although the court did not reach a final determination about the Texas and Florida laws, it confirmed that their core provisions are inconsistent with the First Amendment when they force social media sites to publish user posts that are, at best, irrelevant, and, at worst, false, abusive, or harassing. The government’s favored speakers would be granted special access to the platforms, and the government’s disfavored speakers silenced. 

We filed our first brief advocating this position in 2018 and are pleased to see that the Supreme Court has finally agreed. 

Notably, the court emphasizes another point EFF has consistently made: that the First Amendment right to edit and curate user content does not immunize social media platforms and tech companies more broadly from other forms of regulation not related to editorial policy. As the court wrote: “Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects.” The court specifically calls out competition law as one avenue to address problems related to market dominance and lack of user choice. Although not mentioned in the court’s opinion, consumer privacy laws are another available regulatory tool.  

We will continue to urge platforms large and small to adopt the Santa Clara Principles as a human rights framework for content moderation. Further, we will continue to advocate for strong consumer data privacy laws to regulate social media companies’ invasive practices, as well as more robust competition laws that could end the major platforms’ dominance.   

EFF has been urging courts to adopt this position for almost six years. We filed our first amicus brief in November 2018: https://www.eff.org/document/prager-university-v-google-eff-amicus-brief  

EFF’s must-carry laws issue page: https://www.eff.org/cases/netchoice-must-carry-litigation 

Press release for our SCOTUS amicus brief: https://www.eff.org/press/releases/landmark-battle-over-free-speech-eff-urges-supreme-court-strike-down-texas-and 

Direct link to our brief: https://www.eff.org/document/eff-brief-moodyvnetchoice

What’s the Difference Between Mastodon, Bluesky, and Threads?

The ongoing Twitter exodus sparked life into a new way of doing social media. Instead of a handful of platforms trying to control your life online, people are reclaiming control by building more open and empowering approaches to social media. Some of these you may have heard of: Mastodon, Bluesky, and Threads. Each is distinct, but their differences can be hard to understand as they’re rooted in their different technical approaches. 

The mainstream social web arguably became “five websites, each consisting of screenshots of text from the other four,” but in just the last few years radical and controversial changes to major platforms have been a wake-up call for many, driving people to seek alternatives to the billionaire-driven monocultures.

Two major ecosystems have emerged in the wake, both encouraging the variety and experimentation of the earlier web. The first, built on the ActivityPub protocol, is called the Fediverse. While it includes many different kinds of websites, Mastodon and Threads have taken off as alternatives to Twitter that use this protocol. The other is the AT Protocol, powering the Twitter alternative Bluesky.

These protocols, a shared language between computer systems, allow websites to exchange information. It’s a simple concept you’re benefiting from right now, as protocols enable you to read this post in your choice of app or browser. Opening this freedom to social media has a huge impact, letting everyone send and receive posts their own preferred way. Even better, these systems are open to experiment and can cater to every niche, while still connecting to everyone in the wider network. You can leave the dead malls of platform capitalism, and find the services which cater to you.

To save you some trial and error, we have outlined some differences between these options and what that might mean for them down the road.

ActivityPub and AT Protocols

ActivityPub

The Fediverse goes a bit further back, but ActivityPub’s development by the World Wide Web Consortium (W3C) started in 2014. The W3C is a public-interest nonprofit organization which has played a vital role in developing the open international standards that define the internet, like HTML and CSS (for better or worse). Its commitment to ActivityPub gives some assurance the protocol will be developed in a stable and ostensibly consensus-driven process.

This protocol requires a host website (often called an “instance”) to maintain an “inbox” and “outbox” of content for all of its users, and selectively share this with other host websites on behalf of the users. In this federation model users are accountable to their instance, and instances are accountable to each other. Misbehaving users are banned from instances, and misbehaving instances are cut off from others through “defederation.” This creates some stakes for maintaining good behavior, for users and moderators alike.
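To make the inbox/outbox model concrete, here is a minimal sketch in Python of how one instance might deliver a post to a follower on another instance. The account names and URLs are hypothetical, and real servers also require HTTP Signature authentication, which is omitted here.

    # A minimal sketch of ActivityPub delivery between two hypothetical instances.
    # Real federation also requires HTTP Signatures; omitted for brevity.
    import requests

    actor = "https://example.social/users/alice"              # hypothetical account
    follower_inbox = "https://other.example/users/bob/inbox"  # hypothetical inbox

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": "Note",
            "attributedTo": actor,
            "content": "Hello, Fediverse!",
        },
    }

    # Alice's instance records the activity in her outbox, then pushes a copy
    # to the inbox of each follower's host, which stores and displays it.
    response = requests.post(
        follower_inbox,
        json=activity,
        headers={"Content-Type": "application/activity+json"},
    )
    print(response.status_code)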

ActivityPub handles a wide variety of uses, but the application most associated with the protocol is Mastodon. However, ActivityPub is also integral to Meta’s own Twitter alternative, Threads, which is taking small steps to connect with the Fediverse. Threads is a totally different application, solely hosted by Meta, and is ten times bigger than the Fediverse and Bluesky networks combined, making it the 500-pound gorilla in the room. Meta’s poor reputation on privacy, moderation, and censorship has driven many Fediverse instances to vow they’ll defederate from Threads. Other instances still may connect with Threads to help users find a broader audience, and perhaps help sway Threads users to try Mastodon instead.

AT Protocol

The Authenticated Transfer (AT) Protocol is newer, sparked by Twitter co-founder Jack Dorsey in 2019. Like ActivityPub, it is an open source protocol. However, it is developed unilaterally by a private for-profit corporation—Bluesky PBLLC—though it may be handed to a web standards body in the future. Bluesky remains mostly centralized. While it has recently opened up to small hosts, there are still some restrictions preventing major alternatives from participating. As its developers further loosen control, we will likely see rapid changes in how people use the network.

The AT Protocol network design doesn’t put the same emphasis on individual hosts as the Fediverse does, and it breaks hosting, distribution, and curation into distinct services. It’s easiest to understand in comparison to traditional web hosting. Your information, like posts and profiles, is held in Personal Data Servers (PDSes)—analogous to the hosting of a personal website. This content is then fetched by relay servers, which, like web crawlers, aggregate a “firehose” of everyone’s content without much alteration. To sort and filter this on behalf of the user, like a “search engine,” AT has Appview services, which give users control over what they see. When accessing the Appview through a client app or website, the user has many options to further filter, sort, and curate their feed, as well as “subscribe” to filters and labels someone else made.
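For a sense of how those pieces fit together in practice, here is a small Python sketch that reads one account’s public posts from Bluesky’s public Appview over its XRPC API. The endpoint name and response fields reflect the API at the time of writing and may change; treat this as an illustration of the PDS-to-relay-to-Appview pipeline, not a definitive client.

    # A sketch of reading from the network via an Appview. By the time the
    # Appview answers this request, it has already assembled the content from
    # the relay firehose, which in turn pulled it from each author's PDS.
    import requests

    APPVIEW = "https://public.api.bsky.app/xrpc"

    def author_feed(handle, limit=10):
        """Fetch recent public posts for one account from the public Appview."""
        resp = requests.get(
            f"{APPVIEW}/app.bsky.feed.getAuthorFeed",
            params={"actor": handle, "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("feed", [])

    for item in author_feed("bsky.app"):
        post = item["post"]
        print(post["author"]["handle"], "-", post["record"].get("text", ""))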

The result is a decentralized system which can be highly tailored while still offering global reach. However, this atomized design also means the community accountability encouraged by the host-centered system may be missing, leaving users ultimately responsible for their own experience and moderation. Much will depend on how the network opens up to major hosts other than the Bluesky corporation.

User Experience

Mastodon, Threads, and Bluesky differ in a number of ways that are not essential to their underlying protocols but that affect users looking to get involved today. Mastodon and Bluesky are very customizable, so these notes simply describe the prevalent trends.

Timeline Algorithm

Mastodon and most other ActivityPub sites prefer a more straightforward timeline of content from accounts you follow. Threads has a Meta-controlled algorithm, like Instagram. Bluesky defaults to a chronological feed, but opens algorithmic curation and filtering up to apps and users.

User Design

All three services present a default appearance that will be familiar to anyone who has used Twitter. Both Mastodon and Bluesky support alternative clients, with the only limit being a developer’s imagination. In fact, thanks to their open nature, projects like SkyBridge let users of one network use apps built for the other (in this case, Bluesky users using Mastodon apps). Threads has no alternative clients, and its developer API is still in beta.

Onboarding 

Threads has the easiest onboarding: there is only one site, and it accepts an Instagram account as a login. Bluesky also has only one major option for signing up, but it has some inherent flexibility for moving your account later on. That said, diving into a few extra setup steps can improve the experience. Finally, one could easily join Mastodon by joining the flagship instance, mastodon.social. However, given the importance of choosing the right instance, you may miss out on some of the benefits of the Fediverse and want to move your account later on.

Culture

Threads has a reputation for being more brand-focused, with more commercial accounts and celebrities, and Meta has made no secret about their decisions to deemphasize political posts on the platform. Bluesky is often compared to early Twitter, with a casual tone and a focus on engaging with friends. Mastodon draws more people looking for community online, especially around shared interests, and each instance will have distinct norms.

Privacy Considerations

Neither ActivityPub nor the AT Protocol currently supports private, end-to-end encrypted messages, so neither should be used for sensitive information. On all of these services, the majority of content on your profile will be accessible from the public web. That said, Mastodon, Threads, and Bluesky differ in how they handle user data.

Mastodon

Everything you do as a user is entrusted to the instance host including posts, interactions, DMs, settings, and more. This means the owner of your instance can access this information, and is responsible for defending it against attackers and law enforcement. Tech-savvy people may choose to self-host, but users generally need to find an instance run by someone they trust.

The Fediverse muffles content sharing through a myriad of permissions set by users and instances. If your instance blocks a poorly moderated instance, for example, the people on that other site will no longer appear in your timelines or be able to follow your posts. You can also limit how messages are shared to further reduce the intended audience. While this can create a sense of community and closeness, remember it is still public and instance hosts are always part of the equation. Direct messages, for example, will be accessible to your host and the host of the recipient.

If content needs to be changed or deleted after being shared, your instance can request these changes, and this is often honored. That said, once something is shared to the network, it may be difficult to “undo.”

Threads

All user content is entrusted to one host, in this case Meta, with a privacy policy similar to Instagram. Meta determines when information is shared with law enforcement, how it is used for advertising, how well protected it is from a breach, and so on.

Sharing with instances works differently for Threads, as Meta has more restricted interoperability. Currently, content sharing is one-way: Threads users can opt in to sharing their content with the Fediverse, but won’t see likes or replies. By the end of this year, Meta says it will allow Threads users to follow accounts on Mastodon.

Federation on Threads may always be restricted, and features like transferring one's account to Mastodon may never be supported. Limits in sharing should not be confused with enhanced privacy or security, however. Public posts are just that—public—and you are still trusting your host (Meta) with private data like DMs (currently handled by Instagram). Instead these restrictions, should they persist, should be seen as the minimum level of control over users Meta deems necessary.

Bluesky

Bluesky, in contrast, is a very “loud” system. Every public message, interaction, follow, and block is hosted by your PDS and freely shared with everyone in the network. Every public post is for everyone, discovered only according to each user’s own app and filter preferences. There are ways to imitate smaller spaces with filtering and algorithmic feeds, such as the Blacksky project, but these are open to everyone, and your posts will not be restricted to that curated space.

Direct messages are limited to the flagship Bluesky app, and can be accessed by the Bluesky moderation team. The project plans to eventually incorporate DMs into the protocol, with end-to-end encryption, but that is not currently supported. Deletion on Bluesky is handled simply by removing the content from your PDS, but once a message has been shared to relay and Appview services it may remain in circulation a while longer, according to their retention settings.

Moderation

Mastodon

Mastodon’s approach to moderation is often compared to subreddits, where the administrators of an instance are responsible for creating a set of rules and empowering a team of moderators to keep the community healthy. The result is a lot more variety in moderation experience, with the only boundary being an instance’s reputation in the broader Fediverse. Instances coordinating and “defederating” from problematic hosts has already been effective in the Fediverse. One former instance, Gab, was successfully cut off from the Fediverse for hosting extreme right-wing hate. The threat of defederation sets a baseline of behavior across the Fediverse, and from there users can choose instances based on reputation and on how aligned the hosts are with their own moderation preferences.

At their best, instances prioritize things other than growth. New members are welcomed and onboarded carefully as new community members, and hosts only grow the community if their moderation team can support it. Some instances even set a permanent cap on participation at a few thousand members to ensure a quality, intimate experience. Current members, too, can vote with their feet, and if needed split off into their own new instance without disconnecting entirely.

While Mastodon has a lot going for it by giving users a choice, avoiding automation, and avoiding unsustainable growth, other evergreen moderation issues remain at play. Decisions can be arbitrary, inconsistent, and come with little recourse. These aren’t just decisions impacting individual users, but also ones affecting large swaths of them when it comes to defederation.

Threads

Threads, as alluded to when discussing privacy above, aims for a moderation approach more aligned with pre-2022 Twitter and Meta’s other current platforms like Instagram. That is, it takes on the impossible task of scaling moderation alongside endless user growth.

As the largest of these services, however, this puts Meta in a position to set norms around moderation as it enters the Fediverse. A challenge for decentralized projects will be to ensure Meta’s size doesn’t make it the ultimate authority on moderation decisions, a pattern of re-centralization we’ve seen happen with email. Spam detection tools have created an environment where email, though an open standard, is in practice dominated by Microsoft and Google, as smaller services are frequently marked as spammers. A similar dynamic could play out on the federated social web, where Meta has the capacity to exclude smaller instances with little recourse. Other instances may copy those decisions, or may fear being excluded themselves if they don’t.

Bluesky

While in beta, Bluesky received a lot of praise and criticism for its moderation. However, up until recently, all moderation was handled by the centralized Bluesky company—not throughout the distributed AT network. The true nature of moderation structure on the network is only now being tested.

The AT Protocol relies on labeling services, aka “labelers,” for moderation. These special accounts use Bluesky’s Ozone tool to label posts with small pieces of metadata. You can also filter accounts using account block lists published by other users, much like the Block Together tool formerly available on Twitter. The Appview aggregating your feed uses these labels and block lists to filter content. Arbitrary and irreconcilable moderation decisions are still a problem, as are some of the risks of using automated moderation, but the impact is softened because users are not deplatformed and remain accessible to people with different moderation settings. This also means problematic users don’t go anywhere and can still follow you; they are just less visible.
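To illustrate the general idea, here is a simplified, hypothetical sketch of that filtering step: a client assembling a feed applies the user’s label preferences and subscribed block lists before showing posts. This is not Bluesky’s actual implementation, just an illustration of label- and list-based filtering.

    # Hypothetical illustration of label- and block-list-based filtering.
    # Hidden posts still exist on the network and remain visible to users
    # with different settings; only this user's view changes.
    from dataclasses import dataclass, field

    @dataclass
    class Post:
        author: str                                # e.g. a handle or DID
        text: str
        labels: set = field(default_factory=set)   # labels attached by labelers

    def visible_feed(posts, hidden_labels, block_list):
        """Return only the posts this user has chosen to see."""
        return [
            p for p in posts
            if p.author not in block_list and not (p.labels & hidden_labels)
        ]

    posts = [
        Post("alice.example", "a normal post"),
        Post("spammer.example", "buy now!", {"spam"}),
    ]
    print(visible_feed(posts, hidden_labels={"spam"}, block_list={"troll.example"}))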

The AT network is censorship resistant, and conversely, it is difficult to meaningfully ban users. To be propagated in the network one only needs a PDS to host their account, and at least one Relay to spread that information. Currently Relays sit out of moderation, only scanning to restrict CSAM. In theory Relays could be more like a Fediverse instance and more accurately curate and moderate users. Even then, as long as one Relay carries the user they will be part of the network. PDSes, much like web hosts, may also choose to remove controversial users, but even in those cases PDSes are easy to self-host even on a low-power computer.

As on the internet generally, removing content relies on the fragility of those targeted; with enough resources and support, a voice will remain online. Without user-driven approaches to limit or deplatform content (like defederation), Bluesky services may instead be targeted by censorship at the infrastructure level, such as by ISPs.

Hosting and Censorship

With any internet service, there are some legal obligations when hosting user generated content. No matter the size, hosts may need to contend with DMCA takedowns, warrants for user data, cyber attacks,  blocking from authoritarian regimes, and other pressures from powerful interests. This decentralized approach to social media also relies on a shared legal protection for all hosts, Section 230.  By ensuring they are not held liable for user-generated content, this law provides the legal protection necessary for these platforms to operate and innovate.

Given the differences in the size of hosts and their approach to moderation, it isn’t surprising that each of these platforms will address platform liability and censorship differently.

Mastodon

Instance hosts, even for small communities, need to navigate these legal considerations, as we outlined in our Fediverse legal primer. We have already seen some old patterns reemerge, with these smaller, often hobbyist, hosts struggling to defend themselves from legal challenges and security threats. While larger hosts have resources to defend against these threats, an advantage of the decentralized model is that censors need to play whack-a-mole in a large network where messages flow freely across the globe. Together, the Fediverse is set up to be quite good at keeping information safe from censorship, but individual users and accounts are very susceptible to targeted censorship efforts and will struggle to rebuild their presence.

Threads

Threads is the easiest to address, as Meta is already several platforms deep into addressing liability and speech concerns, and has the resources to do so. Unlike Mastodon or Bluesky, it also needs to do so on a much larger scale, with a larger target on its back as the biggest platform, backed by a multi-billion dollar company. The unique challenge for Threads, however, will be how Meta decides to handle content from the rest of the Fediverse. Threads users will also need to navigate the perks and pitfalls of sticking with a major host with a spotty track record on censorship and disinformation.

Bluesky

Bluesky is not yet tested beyond the flagship Bluesky services, and it raises a lot more questions. PDSes, relays, and even Appviews play some role in hosting, and can be used with some redundancy. For example, your account’s PDS may be targeted, but the system is designed to make it easy for users to change hosts, self-host, or use multiple hosts while retaining one identity on the network.

Relays, in contrast, are more computationally demanding and may remain the most “centralized” service as natural monopolies: users have some incentive to mostly follow the biggest relays. The result is a potential bottleneck susceptible to influence and censorship. However, if we see a wide variety of relays with different incentives, it becomes more likely that messages can be shared throughout the network despite censorship attempts.

You Might Not Have to Choose

With this overview, you can start diving into one of these new Twitter alternatives leading the way in a more free social web. Thanks to the open nature of these new systems, where you set up will become less important with improved interoperability.

Both ActivityPub and AT Protocol developers are receptive to making the two better at communicating with one another, and independent projects like Bridgy Fed, SkyBridge, RSS Parrot, and Mastofeed are already letting users get the best of both worlds. Today a growing number of projects speak both protocols, along with older ones like RSS. These paths toward a decentralized web may become increasingly trivial as they converge, despite some early growing pains. Or the two may be eclipsed by yet another option. But their shared trajectory is moving us toward a freer, more open, and refreshingly weird social web, free of platform gatekeepers.

The Surgeon General's Fear-Mongering, Unconstitutional Effort to Label Social Media

Surgeon General Vivek Murthy’s extraordinarily misguided and speech-chilling call this week to label social media platforms as harmful to adolescents is shameful fear-mongering that lacks scientific evidence and turns the nation’s top physician into a censor. This claim is particularly alarming given the far more complex and nuanced picture that studies have drawn about how social media and young people’s mental health interact.

The Surgeon General’s suggestion that speech be labeled as dangerous is extraordinary. Communications platforms are not comparable to unsafe food, unsafe cars, or cigarettes, all of which are physical products—rather than communications platforms—that can cause physical injury. Government warnings on speech implicate our fundamental rights to speak, to receive information, and to think. Murthy’s effort will harm teens, not help them, and the announcement puts the surgeon general in the same category as censorial public officials like Anthony Comstock.

There is no scientific consensus that social media is harmful to children's mental health. Social science shows that social media can help children overcome feelings of isolation and anxiety. This is particularly true for LGBTQ+ teens. EFF recently conducted a survey in which young people told us that online platforms are the safest spaces for them, where they can say the things they can't in real life ‘for fear of torment.’ They say these spaces have improved their mental health and given them a ‘haven’ to talk openly and safely. This comports with Pew Research findings that teens are more likely to report positive than negative experiences in their social media use.

Additionally, Murthy’s effort to label social media creates significant First Amendment problems in its own right, as any government labeling effort would be compelled speech and courts are likely to strike it down.

Young people’s use of social media has been under attack for several years. Several states have recently introduced and enacted unconstitutional laws that would require age verification on social media platforms, effectively banning some young people from them. Congress is also debating several federal censorship bills, including the Kids Online Safety Act and the Kids Off Social Media Act, that would seriously impact young people’s ability to use social media platforms without censorship. Last year, Montana banned the video-sharing app TikTok, citing both its Chinese ownership and its interest in protecting minors from harmful content. That ban was struck down as unconstitutionally overbroad; despite that, Congress passed a similar federal law forcing TikTok’s owner, ByteDance, to divest the company or face a national ban.

Like Murthy, lawmakers pushing these regulations cherry-pick the research, nebulously citing social media’s impact on young people, and dismissing both positive aspects of platforms and the dangerous impact these laws have on all users of social media, adults and minors alike. 

We agree that social media is not perfect, and can have negative impacts on some users, regardless of age. But if Congress is serious about protecting children online, it should enact policies that promote choice in the marketplace and digital literacy. Most importantly, we need comprehensive privacy laws that protect all internet users from predatory data gathering and sales that target us for advertising and abuse.

The U.S. House Version of KOSA: Still a Censorship Bill

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech content can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. Those requirements will drive away both minors and adults who either lack the proper ID or value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category, or is image-sharing its predominant purpose? What about TikTok, which mixes content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tends to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead pursue alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but what is clear: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. 

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban would take effect, which would be on January 19th, 2025. In the meantime, we expect courts to determine that the bill is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, then the platform would still likely be usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the company. As we’ve seen with other platforms, a change in ownership can result in significant changes that could impact its audience in unexpected ways. In fact, that’s one of the given reasons to force the sale: so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. 

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see it disappear from app stores sometime between now and that January 19, 2025 deadline. 

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies to the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it. 

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic. Enacting this legislation undermines that longstanding democratic principle. It also undermines the U.S. government’s moral authority to call out other nations when they shut down internet access or ban social media apps and other online communications tools.

There are a few reasons legislators have given to ban TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public. 

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass on information about websites that state election officials have identified as disinformation, even if they don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role in some situations for the government to share its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed and reasonably perceived as efforts to replace the platform’s editorial decision-making with the government’s.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, the Members of Congress signed an amicus brief in Murthy supporting placing strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations about the FBI and CISA improperly communicating with social media sites that boil down to the agency passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director for National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of keeping that information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.


Analyzing KOSA’s Constitutional Problems In Depth 

Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.  

Below, we walk through some of the most common criticisms we’ve gotten—and those criticisms the bill has received—to help explain our view of its likely impacts.  

KOSA’s Effectiveness  

First, and most importantly: We have serious and important disagreements with KOSA’s advocates on whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking to those parents, supporters, and lawmakers about ways in which EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe, and have advocated for, comprehensive privacy protections as a better way to begin to address harms done to young people (and old) who have been targeted by platforms’ predatory business practices.  

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think that language added to KOSA to address that censorship concern solves the problem, nor does focusing KOSA’s regulation on design elements of online services address the bill’s First Amendment problems.

Our views of KOSA’s harmful consequences are grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not our first time seeing the vast difference between how a piece of legislation is promoted and what it does in practice. Recently we saw this same dynamic with FOSTA/SESTA, which was promoted by politicians and the parents of  child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that this law was not only ineffective at reducing sex trafficking online, but also created additional dangers for those same victims as well as others.   

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a book seller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a book seller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.  

We think that online services will react to KOSA’s new liability in much the same way as the bookstore in Smith and the book distributor in Bantam Books: They will limit minors’ access to, or simply remove, any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their sites to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.)

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As discussed above, the broad censorship that will result will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. The fact that KOSA will sweep up so much speech also shows that it is far from the least speech-restrictive alternative.

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources." 

We understand that some interpret this language as a safeguard for online services that limits their liability if a minor happens across information on topics that KOSA identifies, and consequently, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or constitutional sense. As a practical matter, it’s not clear how an online service will be able to rely on the rule of construction’s safeguards given the diverse range of content it likely hosts.

Take for example an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views by users, some of which might describe addiction, drug use, and treatment, including negative and positive views on those points. KOSA’s rule of construction might protect the forum from a minor’s initial search for content that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA proscribes, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search for the forum precludes any liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search. 

Further, the rule of construction’s protection for the forum—available only if the forum limits itself to resources for preventing or mitigating drug and alcohol abuse that are grounded in evidence-based information and clinical resources—is unlikely to be helpful. That provision assumes that the forum has the resources to review all of its existing content and effectively screen all future content so that it permits only user-generated content concerning the mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not).

Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much like the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit minors’ ability to access the content, a far easier and safer alternative than making case-by-case expert decisions about every piece of content on the forum.

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores  sell scientific books with detailed pictures of human anatomy, the bookstore still would have to exhaustively review every book it sold and separate the obscene books from the scientific. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.” 

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government overcome its burden under strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral toward or even mildly positive about those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service’s or platform’s design features, the bill raises no First Amendment issues. We disagree.

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content. 

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith regulated the way in which bookstores were designed, and imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we  suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality to comply with KOSA.  

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents certain content to minors that the bill deems harmful to them. To comply with that new liability, those platforms and services’ options are limited. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age  so it can identify its minor users and either filter the content they see or  exclude them from any content that could be deemed harmful under the law.  

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing which of their users are minors.

We also don’t think KOSA lets platforms  claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset. 

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents that harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, such as the ability to scroll endlessly through content, receive notifications of other content or messages, or have content automatically play.

But letting an attorney general enforce KOSA’s design-safeguard requirements could serve as a proxy for targeting services that host content certain officials dislike. The attorney general would simply target the same content or service it disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information.

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that provide only direct or group messages, such as Signal, or to schools, libraries, nonprofits, or ISPs like Comcast generally. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.

But  a wide variety of niche online services that are for-profit  would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.   

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear. 

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Their update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this kind of content filtering, if compelled by KOSA, would be harmful and would violate the First Amendment.

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They very well may decide blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification tools and content moderation tools.  

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this fear is well-founded, but there is some nuance to it. No one should have to hand over their driver's license—or, worse, provide biometric information—just to access lawful speech on websites. But there's nothing in KOSA that would require online platforms to publicly tie your real name to your username.

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn't technically require age verification but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous  data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.   

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much more research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards: it creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end,  we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

5 Questions to Ask Before Backing the TikTok Ban

With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk. 

The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions. 

1. Is the TikTok bill about privacy or content?

Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users? 

From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill  cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public."

This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”

2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws? 

It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why the EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.

It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to them upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.

3. If the TikTok bill is about content, how will it avoid violating the First Amendment? 

Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.

The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.

4. Is the TikTok bill a ban or something else? 

Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.” 

Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment. 

5. Does the U.S. support the free flow of information as a fundamental democratic principle? 

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.

In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”

Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Why U.S. House Members Opposed the TikTok Ban Bill

By Josh Richman
March 14, 2024 at 12:16

What do House Democrats like Alexandria Ocasio-Cortez and Barbara Lee have in common with House Republicans like Thomas Massie and Andy Biggs? Not a lot. But they do know an unconstitutional bill when they see one.

These and others on both sides of the aisle were among the 65 House Members who voted "no" yesterday on the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521, which would effectively ban TikTok. The bill now goes to the Senate, where we hope cooler heads will prevail in demanding comprehensive data privacy legislation instead of this attack on Americans' First Amendment rights.

We're saying plenty about this misguided, unfounded bill, and we want you to speak out about it too, but we thought you should see what some of the House Members who opposed it said, in their own words.

I am voting NO on the TikTok ban.

Rather than target one company in a rushed and secretive process, Congress should pass comprehensive data privacy protections and do a better job of informing the public of the threats these companies may pose to national security.

— Rep. Barbara Lee (@RepBarbaraLee) March 13, 2024

   ___________________ 

Today, I voted against the so-called “TikTok Bill.”

Here’s why: pic.twitter.com/Kbyh6hEhhj

— Rep Andy Biggs (@RepAndyBiggsAZ) March 13, 2024

   ___________________

Today, I voted against H.R. 7521. My full statement: pic.twitter.com/9QCFQ2yj5Q

— Rep. Nadler (@RepJerryNadler) March 13, 2024

   ___________________ 

Today I claimed 20 minutes in opposition to the TikTok ban bill, and yielded time to several likeminded colleagues.

This bill gives the President far too much authority to determine what Americans can see and do on the internet.

This is my closing statement, before I voted No. pic.twitter.com/xMxp9bU18t

— Thomas Massie (@RepThomasMassie) March 13, 2024

   ___________________ 

Why I voted no on the bill to potentially ban tik tok: pic.twitter.com/OGkfdxY8CR

— Jim Himes 🇺🇸🇺🇦 (@jahimes) March 13, 2024

   ___________________ 

I don’t use TikTok. I find it unwise to do so. But after careful review, I’m a no on this legislation.

This bill infringes on the First Amendment and grants undue power to the administrative state. pic.twitter.com/oSpmYhCrV8

— Rep. Dan Bishop (@RepDanBishop) March 13, 2024

   ___________________ 

I’m voting NO on the TikTok forced sale bill.

This bill was incredibly rushed, from committee to vote in 4 days, with little explanation.

There are serious antitrust and privacy questions here, and any national security concerns should be laid out to the public prior to a vote.

— Alexandria Ocasio-Cortez (@AOC) March 13, 2024

   ___________________ 

We should defend the free & open debate that our First Amendment protects. We should not take that power AWAY from the people & give it to the government. The answer to authoritarianism is NOT more authoritarianism. The answer to CCP-style propaganda is NOT CCP-style oppression. pic.twitter.com/z9HWgUSMpw

— Tom McClintock (@RepMcClintock) March 13, 2024

   ___________________ 

I'm voting no on the TikTok bill. Here's why:
1) It was rushed.
2) There's major free speech issues.
3) It would hurt small businesses.
4) America should be doing way more to protect data privacy & combatting misinformation online. Singling out one app isn't the answer.

— Rep. Jim McGovern (@RepMcGovern) March 13, 2024

    ___________________

Solve the correct problem.
Privacy.
Surveillance.
Content moderation.

Who owns #TikTok?
60% investors - including Americans
20% +7,000 employees - including Americans
20% founders
CEO & HQ Singapore
Data in Texas held by Oracle

What changes with ownership? I’ll be voting NO. pic.twitter.com/MrfROe02IS

— Warren Davidson 🇺🇸 (@WarrenDavidson) March 13, 2024

   ___________________ 

I voted no on the bill to force the sale of TikTok. Unlike our adversaries, we believe in freedom of speech and don’t ban social media platforms. Instead of this rushed bill, we need comprehensive data security legislation that protects all Americans.

— Val Hoyle (@RepValHoyle) March 13, 2024

    ___________________

Please tell the Senate to reject this bill and instead give Americans the comprehensive data privacy protections we so desperately need.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Congress Should Give Up on Unconstitutional TikTok Bans

Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521 — which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week.

A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would be either a nationwide ban on TikTok or a forced sale of the application to a different company.

Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target. 

The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now. 

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.

Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging them to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication. The US has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms.

And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana. 

Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling would have severe implications for free speech and privacy not just for visa applicants, but also for the people in their social media networks—millions, if not billions, of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is heightened even further if government agencies use automated tools to conduct their review—which the State Department has not ruled out and which the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than by the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

Taking Back the Web with Decentralization: 2023 in Review

December 31, 2023 at 09:12

When a system becomes too tightly-controlled and centralized, the people being squeezed tend to push back to reclaim their lost autonomy. The internet is no exception. While the internet began as a loose affiliation of universities and government bodies, that emergent digital commons has been increasingly privatized and consolidated into a handful of walled gardens. Their names are too often made synonymous with the internet, as they fight for the data and eyeballs of their users.

In the past few years, there's been an accelerating swing back toward decentralization. Users are fed up with the concentration of power and the prevalence of privacy and free expression violations, and many are fleeing to smaller, independently operated projects.

This momentum wasn’t only seen in the growth of new social media projects. Other exciting projects have emerged this year, and public policy is adapting.  

Major gains for the Federated Social Web

After Elon Musk acquired Twitter (now X) at the end of 2022, many people moved to various corners of the “IndieWeb” at an unprecedented rate. It turns out those were just the cracks before the dam burst this year. 2023 was defined as much by the ascent of federated microblogging as it was by the descent of X as a platform. These users didn't just want a drop-in replacement for Twitter; they wanted to break the major social media platform model for good by forcing hosts to compete on service and respect.

This momentum at the start of the year was principally seen in the fediverse, with Mastodon. This software project filled the microblogging niche for users leaving Twitter, while conveniently being one of the most mature projects using the ActivityPub protocol, the basic building block at the heart of interoperability across the many fediverse services.

Filling a similar niche, but built on the privately developed Authenticated Transfer (AT) Protocol, Bluesky also saw rapid growth despite remaining invite-only and not opening up to interoperation until next year. Projects like Bridgy Fed are already working to connect Bluesky to the broader federated ecosystem, and they show some promise of a future where we don’t have to choose between using the tools and sites we prefer and connecting to friends, family, and many others. 

The other major development in the fediverse came from a seemingly unlikely source—Meta. Meta owns Facebook and Instagram, which have gone to great lengths to control user data—even invoking privacy-washing claims to maintain their walled gardens. So Meta’s launch of Threads in July, a new microblogging site using the fediverse’s ActivityPub protocol, was surprising. After an initial break-out success, thanks to bringing Instagram users into the new service, Threads is already many times larger than the fediverse and Bluesky combined. While such a large site could mean federated microblogging joins federated direct messaging (email) in the mainstream, Threads has not yet begun interoperating, and it may create a rift among hosts and users wary of Meta’s poor track record on user privacy and content moderation.

We also saw the federation of social news aggregation. In June, Reddit outraged its moderators and third-party developers by updating its API pricing policy to become less interoperable. This outrage boiled over into a major platform-wide blackout protesting the changes and the unfair treatment of the unpaid, passionate volunteers who make the site worthwhile. Again, users turned to the maturing fediverse as a decentralized refuge, specifically the more Reddit-like cousins of Mastodon: Lemmy and Kbin. Reddit, echoing Twitter once again, also came under fire for briefly banning users and subreddits related to these fediverse alternatives. While the protests continued well beyond their initial scope and remained in the public eye, order was eventually restored. However, the formerly fringe alternatives in the fediverse continue to be active and improving.

Finally, while these projects made great strides in gaining adoption and improving usability, many remain small and under-resourced. For the decentralized social web to succeed, it must be sustainable and maintain high standards for how users are treated and safeguarded. These indie hosts face liability risks and governmental threats similar to those faced by billion-dollar companies. In a harrowing example we saw this year, an FBI raid on a Mastodon server admin for unrelated reasons resulted in the seizure of an unencrypted server database. It’s a situation that echoes EFF’s founding case over 30 years ago, Steve Jackson Games v. Secret Service, and it underlines the need for small hosts to be prepared to guard against government overreach.

With so much momentum towards better tools and a wider adoption of better standards, we remain optimistic about the future of these federated projects.

Innovative Peer-to-Peer Apps

This year has also seen continued work on components of the web that live further down the stack, in the form of protocols and libraries that most people never interact with but which enable the decentralized services that users rely on every day. The ActivityPub protocol, for example, describes how all the servers that make up the fediverse communicate with each other. ActivityPub opened up a world of federated decentralized social media—but progress isn't stopping there.
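To make that concrete, below is a minimal sketch, in Python and using only the standard library, of the kind of JSON “activity” an ActivityPub server assembles when a user publishes a post. The actor and note URLs are hypothetical placeholders, and a real fediverse server would additionally sign this object and deliver it via an HTTP POST to each recipient’s inbox; the sketch only illustrates the shape of the data the protocol describes.

```python
import json

# Hypothetical actor URL -- a real server would use its own domain.
ACTOR = "https://social.example/users/alice"

def build_create_note(text: str, note_id: str) -> dict:
    """Build a minimal ActivityStreams 'Create' activity wrapping a 'Note'.

    This is the shape of object fediverse servers exchange; delivery happens
    separately, via an HTTP-signed POST to each recipient's inbox.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": f"{ACTOR}/activities/{note_id}",
        "type": "Create",
        "actor": ACTOR,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "id": f"{ACTOR}/notes/{note_id}",
            "type": "Note",
            "attributedTo": ACTOR,
            "content": text,
        },
    }

if __name__ == "__main__":
    # Print the activity a server might deliver when "alice" posts publicly.
    activity = build_create_note("Hello, fediverse!", "1")
    print(json.dumps(activity, indent=2))
```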

Some of our friends are hard at work figuring out what comes next. The Veilid project was officially released in August, at DEFCON, and the Spritely project has been throwing out impressive news and releases all year long. Both projects promise to revolutionize how we can exchange data directly from person to person, securely and privately, and without needing intermediaries. As we wrote, we’re looking forward to seeing where they lead us in the coming year.

The European Union’s Digital Markets Act went into effect in May of 2023, and one of its provisions requires that messaging platforms greater than a certain size must interoperate with other competitors. While each service with obligations under the DMA could offer its own bespoke API to satisfy the law’s requirements, the better result for both competition and users would be the creation of a common protocol for cross-platform messaging that is open, relatively easy to implement, and, crucially, maintains end-to-end encryption for the protection of end users. Fortunately, the More Instant Messaging Interoperability (MIMI) working group at the Internet Engineering Task Force (IETF) has taken up that exact challenge. We’ve been keeping tabs on the group and are optimistic about the possibility of open interoperability that promotes competition and decentralization while protecting privacy.

EFF on DWeb Policy

DWeb Camp 2023

The “star-studded gala” (such as it is) of the decentralized web, DWeb Camp, took place this year among the redwoods of Northern California over a weekend in late June. EFF participated in a number of panels focused on the policy implications of decentralization, how to influence policy makers, and the future direction of the decentralized web movement. The opportunity to connect with others working on both policy and engineering was invaluable, as were the contributions from those living outside the US and Europe.  

Blockchain Testimony

Blockchains have drawn plenty of attention from legislators and regulators in the past handful of years, but most of that focus has been on the financial uses and implications of the tool. EFF had a welcome opportunity to direct attention toward the less-often discussed other potential uses of blockchains when we were invited to testify before the United States House Energy and Commerce Committee Subcommittee on Innovation, Data, and Commerce. The hearing focused specifically on non-financial uses of blockchains, and our testimony attempted to cut through the hype to help members of Congress understand what the technology is, how and when it can be helpful, and what its potential downsides are. 

The overarching message of our testimony was that, at the end of the day, blockchain is just a tool and, just as with other tools, Congress should refrain from regulating it specifically because of what it is. The other important point we made was that the individuals who contribute open-source code to blockchain projects should not, absent some other factor, be the ones held responsible for what others do with the code they write.

A decentralized system means that individuals can “shop” for the moderation style that best suits their preferences.

Moderation in Decentralized Social Media

One of the major issues brought to light by the rise of decentralized social media such as Bluesky and the fediverse this year has been the promises and complications of content moderation in a decentralized space. On centralized social media, content moderation can seem more straightforward. The moderation team has broad insight into the whole network, and, for the major platforms most people are used to, these centralized services have more resources to maintain a team of moderators. Decentralized social media has its own benefits when it comes to moderation, however. For example, a decentralized system means that individuals can “shop” for the moderation style that best suits their preferences. This community-level moderation may scale better than centralized models, as moderators have more context and personal investment in the space.

But decentralized moderation is certainly not a solved problem, which is why the Atlantic Council created the Task Force for a Trustworthy Future Web. The Task Force started out by compiling a comprehensive report on the state of trust and safety work in social media and the upcoming challenges in the space. They then conducted a series of public and private consultations focused on the challenges of content moderation in these new platforms. Experts from many related fields were invited to participate, including EFF, and we were excited to offer our thoughts and to hear from the other assembled groups. The Task Force is compiling a final report that will synthesize the feedback and which should be out early next year.

The past year has been a strong one for the decentralization movement. More and more people are realizing that the large centralized services are not all there is to the internet, and exploration of alternatives is happening at a level that we haven’t seen in at least a decade. New services, protocols, and governance models are also popping up all the time. Throughout the year we have tried to guide newcomers through the differences in decentralized services, inform public policies surrounding these technologies and tools, and help envision where the movement should grow next. We’re looking forward to continuing to do so in 2024.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Kids Online Safety Shouldn’t Require Massive Online Censorship and Surveillance: 2023 Year in Review

Par : Jason Kelley
28 décembre 2023 à 11:25

There’s been plenty of bad news regarding federal legislation in 2023. For starters, Congress has failed to pass meaningful comprehensive data privacy reforms. Instead, legislators have spent an enormous amount of energy pushing dangerous legislation that’s intended to limit young people’s use of some of the most popular sites and apps, all under the guise of protecting kids. Unfortunately, many of these bills would run roughshod over the rights of young people and adults in the process. We spent much of the year fighting these dangerous “child safety” bills, while also pointing out to legislators that comprehensive data privacy legislation would be more likely to pass constitutional muster and address many of the issues that these child safety bills focus on. 

But there’s also good news: so far, none of these dangerous bills have been passed at the federal level, or signed into law. That's thanks to a large coalition of digital rights groups and other organizations pushing back, as well as tens of thousands of individuals demanding protections for online rights in the many bills put forward.

Kids Online Safety Act Returns

The biggest danger has come from the Kids Online Safety Act (KOSA). Originally introduced in 2022, it was reintroduced this year and amended several times, and as of today, has 46 co-sponsors in the Senate. As soon as it was reintroduced, we fought back, because KOSA is fundamentally a censorship bill. The heart of the bill is a “Duty of Care” that the government will force on a huge number of websites, apps, social networks, messaging forums, and online video games. KOSA will compel even the smallest online forums to take action against content that politicians believe will cause minors “anxiety” or “depression,” or encourage substance abuse, among other behaviors. Of course, almost any content could easily fit into these categories—in particular, truthful news about what’s going on in the world, including wars, gun violence, and climate change. Kids don’t need to fall into a wormhole of internet content to get anxious; they could see a newspaper on the breakfast table. 

Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote.

KOSA will empower every state’s attorney general as well as the Federal Trade Commission (FTC) to file lawsuits against websites or apps that the government believes are failing to “prevent or mitigate” the list of bad things that could influence kids online. Platforms affected by KOSA would likely find it impossible to filter out this type of “harmful” content, though they would likely try. Online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave, and institute age verification tools to ensure that it happens. Age verification systems are surveillance systems that threaten everyone’s privacy. Mandatory age verification, and with it, mandatory identity verification, is the wrong approach to protecting young people online.

The Senate passed amendments to KOSA later in the year, but these do not resolve its issues. As an example, liability under the law was shifted to be triggered only for content that online services recommend to users under 18, rather than content that minors specifically search for. In practice, that means platforms could not proactively recommend content to young users that could be “harmful,” but could still show them that same content in response to a search. How this would play out in practice is unclear; search results are recommendations, and future recommendations are impacted by previous searches. But however it’s interpreted, it’s still censorship—and it fundamentally misunderstands how search works online. Ultimately, no amendment will change the basic fact that KOSA’s duty of care turns what is meant to be a bill about child safety into a censorship bill that will harm the rights of both adult and minor users. 

Fortunately, so many people oppose KOSA that it never made it to the Senate floor for a full vote. In fact, even many of the young people it is intended to help are vehemently against it. We will continue to oppose it in the new year, and urge you to contact your congressperson about it today.

Most KOSA Alternatives Aren’t Much Better

KOSA wasn’t the only child safety bill Congress put forward this year. The Protecting Kids on Social Media Act would combine some of the worst elements of other social media bills aimed at “protecting the children” into a single law. It includes elements of KOSA as well as several ideas pulled from state bills that have passed this year, such as Utah’s surveillance-heavy Social Media Regulations law.

When originally introduced, the Protecting Kids on Social Media Act had five major components: 

  • A mandate that social media companies verify the ages of all account holders, including adults 
  • A ban on children under age 13 using social media at all
  • A mandate that social media companies obtain parent or guardian consent before minors over 12 years old and under 18 years old may use social media
  • A ban on the data of minors (anyone over 12 years old and under 18 years old) being used to inform a social media platform’s content recommendation algorithm
  • The creation of a digital ID pilot program, instituted by the Department of Commerce, for citizens and legal residents, to verify ages and parent/guardian-minor relationships

EFF is opposed to all of these components, and has written extensively about why age verification mandates and parental consent requirements are generally dangerous and likely unconstitutional. 

In response to criticisms, senators updated the bill to remove some of the most flagrantly unconstitutional provisions: it no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media.  

One silver lining to this fight is that it has activated young people. 

Still, it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition. It would still prohibit children under 13 from using any ad-based social media, despite the vast majority of content on social media being lawful speech fully protected by the First Amendment. If enacted, the bill would suffer a similar fate to a California law aimed at restricting minors’ access to violent video games, which was struck down in 2011 for violating the First Amendment. 

What’s Next

One silver lining to this fight is that it has activated young people. The threat of KOSA, as well as several similar state-level bills that did pass, has made it clear that young people may be the biggest target for online censorship and surveillance, but that they are also a strong weapon against both.

The authors of these bills have laudable intentions. But laws that would force platforms to determine the age of their users are privacy-invasive, and laws that restrict speech—even if only for those who can’t prove they are above a certain age—are censorship laws. We expect that KOSA, at least, will return in one form or another. We will be ready when it does.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Spritely and Veilid: Exciting Projects Building the Peer-to-Peer Web

13 décembre 2023 à 12:49

While there is a surge in federated social media sites, like Bluesky and Mastodon, some technologists are hoping to take things further than this model of decentralization with fully peer-to-peer applications. Two leading projects, Spritely and Veilid, hint at what this could look like.

There are many technologies used behind the scenes to create decentralized tools and platforms. There has been a lot of attention lately, for example, around interoperable and federated social media sites using ActivityPub, such as Mastodon, as well as platforms like Bluesky using a similar protocol. These types of services require most individuals to sign up with an intermediary service host in order to participate, but they are decentralized insofar as any user has a choice of intermediary, and can run one of those services themselves while participating in the larger network.

Another model for decentralized communications does away with the intermediary services altogether in favor of a directly peer-to-peer model. This model is technically much more challenging to implement, particularly in cases where privacy and security are crucial, but it does result in a system that gives individuals even more control over their data and their online experience. Fortunately, there are a few projects being developed that are aiming to make purely peer-to-peer applications achievable and easy for developers to create. Two leading projects in this effort are Spritely and Veilid.

Spritely

Spritely is worth keeping an eye on. Developed by the institute of the same name, Spritely is a framework for building distributed apps that don’t even have to know that they’re distributed. The project is spearheaded by Christine Lemmer-Webber, who was one of the co-authors of the ActivityPub spec that drives the fediverse. She is taking the lessons learned from that work, combining them with security- and privacy-minded object-capability models, and mixing it all up into a model for peer-to-peer computation that could pave the way for a generation of new decentralized tools.

Spritely is so promising because it is tackling one of the hard questions of decentralized technology: how do we protect privacy and ensure security in a system where data is passing directly between people on the network? Our best practices in this area have been shaped by many years of centralized services, and tackling the challenges of a new paradigm will be important.

One of the interesting techniques that Spritely is bringing to bear on the problem is the concept of object capabilities. OCap is an approach to software design in which a process can view or manipulate only the data it has explicitly been handed. That sounds like common sense, but it is in contrast to the way that most of our computers work, in which the game Minesweeper (just to pick one example) has full access to your entire home directory once you start it up. That isn’t to say that it or any other program is actually reading all your documents, but it has the ability to, which means that an attacker who finds a security flaw in that program could abuse that access.
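As a rough illustration of the difference, here is a minimal Python sketch contrasting that ambient-authority style with capability passing. The file name and functions are hypothetical, and this shows only the flavor of the idea, not Spritely's actual design.

```python
from pathlib import Path

# Ambient authority: the function can open any path the whole process can
# reach, so a bug in it could read files it was never meant to touch.
def word_count_ambient(path: str) -> int:
    return len(Path(path).read_text().split())

# Capability style: the caller hands over an already-opened, read-only file
# object. The function can use only what it was given; it has no handle on
# the rest of the filesystem.
def word_count_capability(readable) -> int:
    return len(readable.read().split())

if __name__ == "__main__":
    Path("note.txt").write_text("a tiny example note")   # hypothetical file
    print(word_count_ambient("note.txt"))                # 4, via ambient authority
    with open("note.txt", "r") as capability:
        print(word_count_capability(capability))         # 4, via an explicit capability
```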

The Spritely Institute is combining OCap with a message passing protocol that doesn’t care if the other party it's communicating with is on the same device, on another device in the same room, or on the other side of the world. And to top things off they’re working on the protocol in the open, with a handful of other dedicated organizations. We’re looking forward to seeing what the Spritely team creates and what their work enables in the future.

Veilid

Another leading project in the push for full p2p apps was just announced a few months ago. The Veilid project was released at DEFCON 31 in August and has a number of promising features that could lead to it being a fundamental tool in future decentralized systems. Described as a cross between Tor and the InterPlanetary File System (IPFS), Veilid is a framework and protocol that offers two complementary tools. The first is private routing, which, much like Tor, can construct an encrypted private tunnel over the public internet, allowing two devices to communicate with each other without anyone else on the network knowing who is talking to whom.
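Here is a toy Python sketch of the general idea behind that kind of route privacy: layered encryption, where each relay can peel off only its own layer, so no single hop learns both who sent a message and where it ends up. This is a conceptual illustration only, not Veilid's actual routing protocol or cryptography.

```python
from cryptography.fernet import Fernet

# One key per relay on the route (in a real system each relay holds its own
# key; here we just generate them locally to demonstrate the layering).
hop_keys = [Fernet.generate_key() for _ in range(3)]
hops = [Fernet(key) for key in hop_keys]

message = b"hello from an unknown sender"

# Build the "onion" from the inside out: the innermost layer is for the last hop.
onion = message
for hop in reversed(hops):
    onion = hop.encrypt(onion)

# Each relay peels exactly one layer and forwards the remainder to the next hop.
for hop in hops:
    onion = hop.decrypt(onion)

assert onion == message  # only after the final hop is the plaintext visible
```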

The second tool that Veilid offers is a Distributed Hash Table (DHT), which lets anyone look up a bit of data associated with a specific key, wherever that data lives on the network. DHTs have a long pedigree: BitTorrent adopted them to replace centralized trackers, directing users to other nodes in the network that have the chunk of a file they need, and they form the backbone of IPFS’s system. Veilid’s DHT is particularly intriguing because it is “multi-writer.” In most DHTs, only one party can set the value stored at a particular key, but in Veilid the creator of a DHT key can choose to share the writing capability with others, creating a system where nodes can communicate by leaving notes for each other in the DHT. Veilid has created an early alpha of a chat program, VeilidChat, based on exactly this feature.
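A toy, in-memory Python sketch of what "multi-writer" means in practice: a record accepts a new value only if it carries a valid signature from one of the writers the record's creator authorized. The class and record format here are invented for illustration and bear no relation to Veilid's real API or wire format.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

class ToyDHTRecord:
    """A record that accepts values signed by any of its authorized writers."""

    def __init__(self, writer_public_keys):
        self.writers = list(writer_public_keys)
        self.values = {}  # subkey -> (value, signature)

    def put(self, subkey: int, value: bytes, signature: bytes) -> bool:
        for writer in self.writers:
            try:
                writer.verify(signature, value)
                self.values[subkey] = (value, signature)
                return True
            except InvalidSignature:
                continue
        return False  # no authorized writer signed this value

alice = Ed25519PrivateKey.generate()
bob = Ed25519PrivateKey.generate()

# The record creator shares the write capability with both Alice and Bob.
record = ToyDHTRecord([alice.public_key(), bob.public_key()])
note = b"meet at the usual relay"
print(record.put(0, note, bob.sign(note)))      # True: Bob is an authorized writer

mallory = Ed25519PrivateKey.generate()
print(record.put(1, note, mallory.sign(note)))  # False: Mallory is not authorized
```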

Both of these features are even more valuable because Veilid is a very mobile-friendly framework. The library is available for a number of platforms and programming languages, including the cross-platform Flutter framework, which means it is easy to build iOS and Android apps that use it. Mobile has been a difficult platform to build peer-to-peer apps on for a variety of reasons, so having a turnkey solution in the form of Veilid could be a game changer for decentralization in the next couple of years. We’re excited to see what gets built on top of it.

Public interest in decentralized tools and services is growing, as people realize that there are downsides to centralized control over the platforms that connect us all. The past year has seen interest in networks like the fediverse and Bluesky explode and there’s no reason to expect that to change. Projects like Spritely and Veilid are pushing the boundaries of how we might build apps and services in the future. The things that they are making possible may well form the foundation of social communication on the internet in the next decade, making our lives online more free, secure, and resilient.

Meta Announces End-to-End Encryption by Default in Messenger

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now, this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, whether to turn off “read receipts,” and whether to enable “disappearing” messages. Choosing between these options is important for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same as WhatsApp). When it comes to building secure messengers, or in this case, porting a billion users onto secure messaging, the details are the most important part. Here, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, began offering end-to-end encrypted backups only a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which they call Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy, in exchange for usability. If an app is forward-secret, you could delete all your messages, hand someone else your phone, and they would not be able to recover them. This tradeoff is another factor to weigh when choosing how to use secure messengers that give you the option.

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back up your private key to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless that cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not be able to remember a PIN. 
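For a sense of what a PIN-protected backup key can look like under the hood, here is a conceptual Python sketch, not Meta's actual Labyrinth design: a wrapping key is derived from the PIN with a memory-hard KDF and used to encrypt ("wrap") the real backup key. Because a 6-digit PIN alone is easy to brute-force, real deployments pair this idea with strict attempt limits or hardware-backed key vaults.

```python
from os import urandom
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_backup_key(pin: str, backup_key: bytes) -> dict:
    # Derive a wrapping key from the PIN with scrypt, then seal the backup key.
    salt = urandom(16)
    wrapping_key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(pin.encode())
    nonce = urandom(12)
    return {"salt": salt, "nonce": nonce,
            "wrapped": AESGCM(wrapping_key).encrypt(nonce, backup_key, None)}

def unwrap_backup_key(pin: str, blob: dict) -> bytes:
    # Re-derive the wrapping key from the PIN and decrypt the backup key.
    wrapping_key = Scrypt(salt=blob["salt"], length=32, n=2**15, r=8, p=1).derive(pin.encode())
    return AESGCM(wrapping_key).decrypt(blob["nonce"], blob["wrapped"], None)

backup_key = urandom(32)                        # the key that encrypts message history
blob = wrap_backup_key("123456", backup_key)    # only the wrapped blob is stored remotely
assert unwrap_backup_key("123456", blob) == backup_key
```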

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

Protecting Kids on Social Media Act: Amended and Still Problematic

Senators who believe that children and teens must be shielded from social media have updated the problematic Protecting Kids on Social Media Act, though it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition.  

As we wrote in August, the original bill (S. 1291) contained a host of problems. A recent draft of the amended bill gets rid of some of the most flagrantly unconstitutional provisions: It no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media. 

However, the amended bill is still rife with issues.   

The biggest is that it prohibits children under 13 from using any ad-based social media. Though many social media platforms do require users to be over 13 to join (primarily to avoid liability under COPPA), some platforms designed for young people do not. Most of those platforms are not ad-based, but there is no reason that young people should be barred entirely from a thoughtful, cautious platform that is designed for children but also relies on contextual ads. Were this bill made law, ad-based platforms might switch to a fee-based model, limiting access to only the young people who can afford the fee. Banning children under 13 from having social media accounts is a massive overreach that takes authority away from parents and infringes on the First Amendment rights of minors.  

The vast majority of content on social media is lawful speech fully protected by the First Amendment. Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media. At the same time, parents have a right to oversee their children’s online activities. But the First Amendment forbids Congress from making a freewheeling determination that children can be blocked from accessing lawful speech. The Supreme Court has ruled that there is no children’s exception to the First Amendment.   

Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media.

Perhaps recognizing this, the amended bill includes a caveat that children may still view publicly available social media content that is not behind a login, or through someone else’s account (for example, a parent’s account). But this does not help the bill. Because the caveat is essentially a giant loophole that will allow children to evade the bill’s prohibition, it raises legitimate questions about whether the sponsors are serious about trying to address the purported harms they believe exist anytime minors access social media. As the Supreme Court wrote in striking down a California law aimed at restricting minors’ access to violent video games, a law that is so “wildly underinclusive … raises serious doubts about whether the government is in fact pursuing the interest it invokes….” If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

Another problem: The amended bill employs a new standard for determining whether platforms know the age of users: “[a] social media platform shall not permit an individual to create or maintain an account if it has actual knowledge or knowledge fairly implied on the basis of objective circumstances that the individual is a child [under 13].” As explained below, this may still force online platforms to engage in some form of age verification for all their users. 

While this standard comes from FTC regulatory authority, the amended bill attempts to define it for the social media context. The amended bill directs courts, when determining whether a social media company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor, to consider “competent and reliable empirical evidence, taking into account the totality of the circumstances, including whether the operator, using available technology, exercised reasonable care.” But, according to the amended bill, “reasonable care” is not meant to mandate “age gating or age verification,” the collection of “any personal data with respect to the age of users that the operator is not already collecting in the normal course of business,” the viewing of “users’ private messages” or the breaking of encryption. 

While these exclusions provide superficial comfort, the reality is that companies will take the path of least resistance and will be incentivized to implement age gating and/or age verification, which we’ve raised concerns about many times over. This bait-and-switch tactic is not new in bills that aim to protect young people online. Legislators, aware that age verification requirements will likely be struck down, are explicit that the bills do not require age verification. Then, they write a requirement that would lead most companies to implement age verification or else face liability.  

If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

In practice, it’s not clear how a court is expected to determine whether a company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor in the event of an enforcement action. While the lack of age gating or age verification mechanisms may not be proof that a company failed to exercise reasonable care in letting a child under 13 use the site, the use of age gating or age verification tools to deny children under 13 the ability to use a social media site will surely be an acceptable way to avoid liability. Moreover, without more guidance, this standard of “reasonable care” is quite vague, which poses additional First Amendment and due process problems. 

Finally, although the bill no longer creates a digital ID pilot program for age verification, it still tries to push the issue forward. The amended bill orders a study and report looking at “current available technology and technologically feasible methods and options for developing and deploying systems to provide secure digital identification credentials; and systems to verify age at the device and operating system level.” But any consideration of digital identification for age verification is dangerous, given the risk of sliding down the slippery slope toward a national ID that is used for many more things than age verification and that threatens individual privacy and civil liberties. 

Platforms Must Stop Unjustified Takedowns of Posts By and About Palestinians

Legal intern Muhammad Essa Fasih contributed to this post.

Social media is a crucial means of communication in times of conflict—it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity. Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.

In the weeks since war between Hamas and Israel began, social media platforms have removed content from or suspended accounts of Palestinian news sites, activists, journalists, students, and Arab citizens in Israel, interfering with the dissemination of news about the conflict and silencing voices expressing concern for Palestinians.

The platforms say some takedowns were caused by security issues, technical glitches, mistakes that have been fixed, or stricter rules meant to reduce hate speech. But users complain of unexplained removals of posts about Palestine since the October 7 Hamas terrorist attacks.

Meta’s Facebook shut down the page of independent Palestinian website Quds News Network, a primary source of news for Palestinians with 10 million followers. The network said its Arabic and English news pages had been deleted from Facebook, though it had been fully complying with Meta's defined media standards. Quds News Network has faced similar platform censorship before—in 2017, Facebook censored its account, as did Twitter in 2020.

Additionally, Meta’s Instagram has locked or shut down accounts with significant followings. Among these are Let’s Talk Palestine, an account with over 300,000 followers that shares informative pro-Palestinian content, and the Palestinian media outlet 24M. Meta said the accounts were locked for security reasons after signs that they were compromised.

The account of the news site Mondoweiss was also banned by Instagram and taken down on TikTok before being restored on both platforms.

Meanwhile, Instagram, TikTok, and LinkedIn users sympathetic to or supportive of the plight of Palestinians have complained of “shadow banning,” a process in which the platform limits the visibility of a user's posts without notifying them. Users say the platforms limited the visibility of posts that contained the Palestinian flag.

Meta has admitted to suppressing comments containing the Palestinian flag in certain “offensive contexts” that violate its rules. Responding to a surge in hate speech after Oct. 7, the company lowered the confidence threshold at which comments are predicted to qualify as harassment or incitement to violence from 80 percent to 25 percent for users in Palestinian territories. Some content creators are using code words and emojis and shifting the spelling of certain words to evade automated filtering. Meta needs to be more transparent about decisions that downgrade users’ speech that does not violate its rules.

For some users, posts have led to more serious consequences. Palestinian citizens of Israel, including well-known singer Dalal Abu Amneh from Nazareth, have been arrested for social media postings about the war in Gaza that are alleged to express support for the terrorist group Hamas.

Amneh’s case demonstrates a disturbing trend concerning social media posts supporting Palestinians. Amneh’s post of the Arabic motto “There is no victor but God” and the Palestinian flag was deemed incitement. Amneh, whose music celebrates Palestinian heritage, was expressing religious sentiment, her lawyer said, not calling for violence as the police claimed.

She received hundreds of death threats and filed a complaint with Israeli police, only to be taken into custody. Her post was removed. Israeli authorities are treating any expression of support or solidarity with Palestinians as illegal incitement, the lawyer said.

Content moderation does not work at scale even in the best of times, as we have said repeatedly. At all times, mistakes can lead to censorship; during armed conflicts they can have devastating consequences.

Whether through content moderation or technical glitches, platforms may also unfairly label people and communities. Instagram, for example, inserted the word “terrorist” into the profiles of some Palestinian users when its auto-translation converted the Palestinian flag emoji followed by the Arabic word for “Thank God” into “Palestinian terrorists are fighting for their freedom.” Meta apologized for the mistake, blaming it on a bug in auto-translation. The translation is now “Thank God.”

Palestinians have long fought private censorship, so what we are seeing now is not particularly new. But it is growing at a time when online speech protections are sorely needed. We call on companies to clarify their rules, including any specific changes that have been made in relation to the ongoing war, and to stop the knee-jerk reaction of treating posts that express support for Palestinians, notify users of peaceful demonstrations, or document violence and the loss of loved ones as incitement. Platforms should instead follow their own existing standards to ensure that moderation remains fair and unbiased.

Platforms should also follow the Santa Clara Principles on Transparency and Accountability in Content Moderation: notify users when, how, and why their content has been actioned, and give them the opportunity to appeal. We know Israel has worked directly with Facebook, requesting and garnering removal of content it deemed incitement to violence and suppressing posts by Palestinians about human rights abuses during May 2021 demonstrations that turned violent.

The horrific violence and death in Gaza is heartbreaking. People are crying out their grief and outrage to the world, to family and friends, to co-workers, religious leaders, and politicians. Labeling large swaths of this outpouring of emotion by Palestinians as incitement is unjust and wrongly denies people an important outlet for expression and solace.

Young People May Be The Biggest Target for Online Censorship and Surveillance—and the Strongest Weapon Against Them

Par : Jason Kelley
30 octobre 2023 à 15:54

Over the last year, state and federal legislatures have tried to pass—and in some cases succeeded in passing—legislation that bars young people from digital spaces, censors what they are allowed to see and share online, and monitors and controls when and how they can do it. 

EFF and many other digital rights and civil liberties organizations have fought back against these bills, but the sheer number is alarming. At times it can be nearly overwhelming: there are bills in Texas, Utah, Arkansas, Florida, Montana; there are federal bills like the Kids Online Safety Act and the Protecting Kids on Social Media Act. And there’s legislation beyond the U.S., like the UK’s Online Safety Bill

JOIN EFF AT the neon level 

Young people, too, have fought back. In the long run, we believe we’ll win, together—and because of your help. We’ve won before: In the 1990s, Congress enacted sweeping legislation that would have curtailed online rights for people of all ages. But that law was aimed, like much of today’s legislation, at young people like you. Along with the ACLU, we challenged the law and won core protections for internet rights in a Supreme Court case, Reno v. ACLU, that recognized that free speech on the Internet merits the highest standards of Constitutional protection. The Court’s decision was its first involving the Internet. 

Even before that, EFF was fighting on the side of teens living on the cutting edge of the ‘net (or however they described it then). In 1990, a Secret Service dragnet called Operation Sundevil seized more than 40 computers from young people in 14 American cities. EFF was formed in part to protect those youths.

So the current struggle isn’t new. As before, young people are targeted by governments, schools, and sometimes parents, who either don’t understand or won’t admit the value that online spaces, and technology generally, offer, no matter your age. 

And, as before, today’s youth aren’t handing over their rights. Tens of thousands of you have vocally opposed flawed censorship bills like KOSA. You’re using the digital tools that governments want to strip you of to fight back, rallying together on Discords and across social media to protect online rights. 

If we don’t succeed in legislatures, know that we will push back in courts, and we will continue building technology for a safe, encrypted internet that anyone, of any age, can access without fear of surveillance or government censorship. 

If you’re a young person eager to help protect your online rights, we’ve put together a few of our favorite ways below to help guide you. We hope you’ll join us, however you can.

Here’s How to Take Your Rights With You When You Go Online—At Any Age

Join EFF at a Special “Neon” Level Membership for Just $18

The huge number of young people working hard to oppose the Kids Online Safety Act has been inspiring. Whatever happens, EFF will be there to keep fighting—and you can help us keep up the fight by becoming an EFF member. 

We’ve created a special Neon membership level for anyone under 18 that’s the lowest price we’ve ever offered–just $18 for a year’s membership. If you can, help support the activists, technologists, and attorneys defending privacy, digital creativity, and internet freedom for everyone by becoming an EFF member with a one-time donation. You’ll get a sticker pack (see below), insider briefings, and more. 

JOIN EFF at the neon level 

We aren’t verifying any ages for this membership level because we trust you. (And, because we oppose online age verification laws—read more about why here.)

Gift a Neon Membership 

Not a young person, but have one in your life that cares about digital rights? You can also gift a Neon membership! Membership helps us build better tech, better laws, and a better internet at a time when the world needs it most. Every generation must fight for their rights, and now, that battle is online. If you know a teen that cares about the internet and technology, help make them an EFF member! 

Speak Up with EFF’s Action Center

Young people—and people of every age—have already sent thousands of messages to Congress this year advocating against dangerous bills that would limit their access to online spaces, their privacy, and their ability to speak out online. If you haven’t done so, make sure that legislators writing bills that affect your digital life hear from you by visiting EFF’s Action Center, where you can quickly send messages to your representatives at the federal and state level (and sometimes outside of the U.S., if you live elsewhere). Take our action for KOSA today if you haven’t yet: 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Other bills that might interest you, as of October 2023, are the Protecting Kids on Social Media Act and the RESTRICT Act.

If you’re under 18, you should know that many more pieces of legislation at the state level have passed or are pending this year that would impact you. You can always reach out to your representatives even if we don’t have an Action Center message available by finding the legislation here, for example, and the contact info of your rep on their website.

Protect Your Privacy with Surveillance Self-Defense

Protecting yourself online as a young person is often more complicated than it is for others. In addition to threats to your privacy posed by governments and companies, you may also want to protect some private information from schools, peers, and even parents. EFF’s Surveillance Self-Defense hub is a great place to start learning how to think about privacy, and what steps you can take to ensure information about you doesn’t go anywhere you don’t want. 

Fight for Strong Student Rights

Schools have become a breeding ground for surveillance. In 2023, most kids can tell you: surveillance cameras in school buildings are passé. Nearly all online activity in school is filtered and flagged. Children are accused of cheating by algorithms and given little recourse to prove their innocence. Facial recognition and other dangerous, biased biometric scanning is becoming more and more common.

But it’s not all bad. Courts have expanded some student rights recently. And you can fight back in other ways. For a broad overview, use our Privacy for Students guide to understand how potential surveillance and censorship impact you, and what to do about it. If it fits, consider following that guide up with our LGBTQ Youth module.

If you want to know more, take a deep dive into one of the most common surveillance tools in schools—student monitoring software—with our Red Flag Machine project and quiz. We analyzed public records from GoGuardian, a tool used in thousands of schools to monitor the online activity of millions of students, and what we learned is honestly shocking. 

And don’t forget to follow our other Student Privacy work. We regularly dissect developments in school surveillance, monitoring, censorship, and how they can impact you. 

Start a Local Tech or Digital Rights Group 

Don’t work alone! If you have friends or know others in your area that care about the benefits of technology, the internet, digital rights—or think they just might be interested in them—why not form a club? It can be particularly powerful to share why important issues like free speech, privacy, and creativity matter to you, and having a group behind you if you contact a representative can add more weight to your message. Depending on the group you form, you might also consider joining the EFA! (See below.)

Not sure how to meet with other folks in your area? Why not join an already-started online Discord server of young people fighting back against online censorship, or start your own?

Find Allies or Join other Grassroots Groups in the Electronic Frontier Alliance

The Electronic Frontier Alliance is a grassroots network of community and campus organizations across the United States working to educate our neighbors about the importance of digital rights. Groups of young people can be a great fit for the EFA, which includes chapters of Encode Justice, campus groups in computer science, hacking, tech, and more. You can find allies, or if you build your own group, join up with others. On our EFA site you’ll find toolkits on event organizing, talking to media, activism, and more. 

Speak out on Social Media

Social networks are great platforms for getting your message out into the world, cultivating a like-minded community, staying on top of breaking news and issues, and building a name for yourself. Not sure how to make it happen? We’ve got a toolkit to get you started! Also, do a quick search for some of the issues you care about—like “KOSA,” for example—and take a look at what others are saying. (Young TikTok users have made hundreds of videos describing what’s wrong with KOSA, and Tumblr—yes, Tumblr—has multiple anti-KOSA blogs that have gone viral multiple times.) You can always join in the conversation that way. 

Teach Digital Privacy with SEC 

If you’ve been thinking about digital privacy for a while now, you may want to consider sharing that information with others. The Security Education Companion is a great place to start if you’re looking for lesson plans to teach digital security to others.

In College (or Will Be Soon)? Take the Tor University Challenge

In the Tor University Challenge, you can help advance human rights with free and open-source technology, empowering users to defend against mass surveillance and internet censorship. Tor is a service that helps you protect your anonymity while using the Internet. It has two parts: the Tor Browser, which you can download and use to browse the Internet anonymously, and the volunteer network of computers that makes that software work. Universities are great places to run Tor relays because they have fast, stable connections and computer science and IT departments that can work with students to keep a relay running, giving those students hands-on cybersecurity experience while they think about global policy, law, and society. 

Visit Tor University to get started. 

Learn about Local Surveillance and Fight Back 

Young people don’t just have to worry about government censorship and school surveillance. Law enforcement agencies routinely deploy advanced surveillance technologies in our communities that can be aimed at anyone, but are particularly dangerous for young black and brown people. Our Street-Level Surveillance resources are designed for members of the public, advocacy organizations, journalists, defense attorneys, and policymakers who often are not getting the straight story from police representatives or the vendors marketing this equipment. But at any age, it’s worth learning how automated license plate readers, gunshot detection, and other police equipment works.

Don’t stop there. Our Atlas of Surveillance documents the police tech that’s actually being deployed in individual communities. Search our database of police tech by entering a city, county, state or agency in the United States. 

Follow EFF

Stay educated about what’s happening in the tech world by following EFF. Sign up for our once- or twice-monthly email newsletter, EFFector. Follow us on Meta, Mastodon, Instagram, TikTok, Bluesky, Twitch, YouTube, and Twitter. Listen to our podcast, How to Fix the Internet, for candid discussions of digital rights issues with some of the smartest people working in the field. 


There are so many ways for people of all ages to fight for and protect the internet for themselves and others. (Just take a look at some of the ways we’ve fought for privacy, free speech, and creativity over the years: an airship, an airplane, and a badger; encrypting pretty much the entire web and also cracking insecure encryption to prove a point; putting together a speculative fiction collection and making a virtual reality game—to name just a few.)

Whether you’re new to the fight, or you’ve been online for decades—we’re glad to have you.
