
EFF to New York: Age Verification Threatens Everyone's Speech and Privacy

October 15, 2024 at 14:11

Young people have a right to speak and access information online. Legislatures should remember that protecting kids' online safety shouldn't require sweeping online surveillance and censorship.

EFF reminded the New York Attorney General of this important fact in comments responding to the state's recently passed Stop Addictive Feeds Exploitation (SAFE) for Kids Act—which requires platforms to verify the ages of people who visit them. Now that New York's legislature has passed the bill, it is up to the state attorney general's office to write rules to implement it.

We urge the attorney general's office to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. As we say in our comments:

[O]nline age-verification mandates like that imposed by the New York SAFE For Kids Act are unconstitutional because they block adults from content they have a First Amendment right to access, burden their First Amendment right to browse the internet anonymously, and chill data security- and privacy-minded individuals who are justifiably leery of disclosing intensely personal information to online services. Further, these mandates carry with them broad, inherent burdens on adults’ rights to access lawful speech online. These burdens will not and cannot be remedied by new developments in age-verification technology.

We also noted that none of the methods of age verification listed in the attorney general's call for comments is both privacy-protective and entirely accurate. They each have their own flaws that threaten everyone's privacy and speech rights. "These methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of 'dangerous in one way' to 'dangerous in a different way'," we wrote in the comments.

Read the full comments here: https://www.eff.org/document/eff-comments-ny-ag-safe-kids-sept-2024

Hack of Age Verification Company Shows Privacy Danger of Social Media Laws

By Jason Kelley
June 26, 2024 at 19:07

We’ve said it before: online age verification is incompatible with privacy. Companies responsible for storing or processing sensitive documents like drivers’ licenses are likely to encounter data breaches, potentially exposing not only personal data like users’ government-issued ID, but also information about the sites that they visit. 

This threat is not hypothetical. This morning, 404 Media reported that a major identity verification company, AU10TIX, left login credentials exposed online for more than a year, allowing access to this very sensitive user data. 

A researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents,” including “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents. Platforms reportedly using AU10TIX for identity verification include TikTok and X, formerly Twitter. 

Lawmakers pushing forward with dangerous age verification laws should stop and consider this report. Proposals like the federal Kids Online Safety Act and California’s Assembly Bill 3080 are moving further toward passage, with lawmakers in the House scheduled to vote in a key committee on KOSA this week, and California's Senate Judiciary Committee set to discuss AB 3080 next week. Several other laws requiring age verification for accessing “adult” content and social media have already passed in states across the country. EFF and others are challenging some of these laws in court. 

In the final analysis, age verification systems are surveillance systems. Mandating them forces websites to require visitors to submit information such as government-issued identification to companies like AU10TIX. As this breach shows, hacks and data breaches of this sensitive information are not a hypothetical concern; it is simply a matter of when the data will be exposed. 

Data breaches can lead to any number of dangers for users: phishing, blackmail, or identity theft, in addition to the loss of anonymity and privacy. Requiring users to upload government documents—some of the most sensitive user data—will hurt all users. 

According to the news report, the user data exposed in the AU10TIX case has so far not been accessed beyond what the researcher showed was possible. If age verification requirements are passed into law, users will likely find themselves forced to share their private information across networks of third-party companies if they want to continue accessing and sharing online content. Within a year, it wouldn’t be strange to have uploaded your ID to a half-dozen different platforms. 

No matter how vigilant you are, you cannot control what other companies do with your data. If age verification requirements become law, you’ll have to be lucky every time you are forced to share your private information. Hackers will just have to be lucky once. 

EFF Urges Supreme Court to Reject Texas’ Speech-Chilling Age Verification Law

By Lisa Femia
May 21, 2024 at 18:01

A Texas age verification law will rob people of anonymity online, chill access to speech for privacy- and security-minded internet users, and entirely block some adults from accessing constitutionally protected online content, EFF argued in a brief filed with the Supreme Court last week.

EFF joined the Woodhull Freedom Foundation in filing a friend-of-the-court brief urging the U.S. Supreme Court to grant review of—and ultimately overturn—the Fifth Circuit’s decision upholding the Texas law.

Last year, the state of Texas passed HB 1181 in a misguided attempt to shield minors from certain online content. The law requires all Texas internet users, including adults, to complete invasive “age verification” procedures on every website the state deems to be at least one-third composed of sexual material. Under the law, adult users must upload sensitive personal records—such as a driver’s license or other photo ID—to access any content on these sites, including non-explicit content. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect.

The Fifth Circuit’s decision disregards important constitutional principles. The First Amendment protects our right to access protected online speech without substantial government interference. For adults, this is true even if that speech constitutes sexual or explicit content. The government cannot burden adult internet users and force them to sacrifice their anonymity, privacy, and security simply to access lawful speech.

EFF’s position is hardly unique. Courts have repeatedly and consistently held similar age verification laws to be unconstitutional due to these and other harms. As EFF noted in its brief, the Fifth Circuit’s decision is an anomaly and has created a split among federal circuit courts. 

In coming to its decision, the Fifth Circuit relied largely on a single Supreme Court case from 1968, involving a law that required an in-person ID check to buy magazines featuring adult content. But online age verification is nothing like flashing an ID card in person to buy a particular physical item.

For one, HB 1181 blocks access to entire websites, not just individual offending magazines. This could include many common, general-purpose websites, so long as at least one-third of the content is conceivably adult content. “HB 1181’s requirements are akin to requiring ID every time a user logs into a streaming service like Netflix, regardless of whether they want to watch a G- or R-rated movie,” EFF wrote.

Second, and unlike with in-person age-gates, the only viable way for a website to comply with HB 1181 is to require all users to upload and submit, not just momentarily display, a data-rich government-issued ID or other document with personal identifying information. In its brief, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns.

For example, HB 1181 may permit the Texas government to log and track user access when verification is done via government-issued ID. As the trial court explained, the law “runs the risk that the state can monitor when an adult views sexually explicit materials” and threatens to force individuals “to divulge specific details of their sexuality to the state government to gain access to certain speech.”

Additionally, a person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. EFF noted that HB 1181 does not require all parties who may have access to the data—such as third-party intermediaries, data brokers, or advertisers—to delete that data. This leaves users highly vulnerable to data breaches and other security harms.

Finally, EFF explained that millions of adult internet users would be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

There are less restrictive alternatives to mass online age-gating that would still protect minors without substantially burdening adults. The trial court, in fact, outlined several of these alternatives in its decision, based on the factual evidence presented by the parties. The Fifth Circuit completely ignored these findings.

EFF has been a steadfast critic of efforts to censor the internet and burden access to online speech. We hope the Supreme Court agrees to hear this appeal and reverses the decision of the Fifth Circuit.

EFF Urges New York Court to Protect Online Speakers’ Anonymity

The First Amendment requires courts to apply a robust balancing test before unmasking anonymous online speakers, EFF explained in an amicus brief it filed recently in a New York State appeal.

In the case on appeal, GSB Gold Standard v. Google, a German company that sells cryptocurrency investments is seeking to unmask an anonymous blogger who criticized the company. Based upon a German court order, the company sought a subpoena that would identify the blogger. The blogger fought back, without success, and they are now appealing.

Like speech itself, the First Amendment right to anonymity fosters and advances public debate and self-realization. Anonymity allows speakers to communicate their ideas without being defined by their identity. Anonymity protects speakers who express critical or unpopular views from harassment, intimidation, or being silenced. And, because powerful individuals or entities’ efforts to punish one speaker through unmasking may well lead others to remain silent, protecting anonymity for one speaker can promote free expression for many others.

Too often, however, corporate or human persons try to abuse the judicial process to unmask anonymous speakers. Thus, courts should apply robust evidentiary and procedural standards before compelling the disclosure of an anonymous speaker’s identity. 

Under these standards, parties seeking to unmask anonymous speakers must first show they have meritorious legal claims, to help ensure that the litigation isn’t a pretext for harassment. Those parties that meet this first step must then also show that their interests in unmasking an anonymous speaker outweigh the speaker’s interests in retaining their anonymity. In this case, the trial court didn’t require the German company to meet this standard, and it could not have in any event.

Courts around the United States have adopted various forms of this test, with EFF often participating as amicus or counsel. We hope that New York follows their lead.

Don’t Fall for the Latest Changes to the Dangerous Kids Online Safety Act 

The authors of the dangerous Kids Online Safety Act (KOSA) unveiled an amended version this week, but it’s still an unconstitutional censorship bill that continues to empower state officials to target services and online content they do not like. We are asking everyone reading this to oppose this latest version, and to demand that their representatives oppose it—even if you have already done so. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA remains a dangerous bill that would allow the government to decide what types of information can be shared and read online by everyone. It would still require an enormous number of websites, apps, and online platforms to filter and block legal, and important, speech. It would almost certainly still result in age verification requirements. Some of its provisions have changed over time, and its latest changes are detailed below. But those improvements do not cure KOSA’s core First Amendment problems. Moreover, a close review shows that state attorneys general still have a great deal of power to target online services and speech they do not like, which we think will harm children seeking access to basic health information and a variety of other content that officials deem harmful to minors.  

We’ll dive into the details of KOSA’s latest changes, but first we want to remind everyone of the stakes. KOSA is still a censorship bill and it will still harm a large number of minors who have First Amendment rights to access lawful speech online. It will endanger young people and impede the rights of everyone who uses the platforms, services, and websites affected by the bill. Based on our previous analyses, statements by its authors and various interest groups, as well as the overall politicization of youth education and online activity, we believe the following groups—to name just a few—will be endangered:  

  • LGBTQ+ Youth will be at risk of having content, educational material, and their own online identities erased.  
  • Young people searching for sexual health and reproductive rights information will find their search results stymied. 
  • Teens and children in historically oppressed and marginalized groups will be unable to locate information about their history and shared experiences. 
  • Activist youth on either side of the aisle, such as those fighting for changes to climate laws, gun laws, or religious rights, will be siloed, and unable to advocate and connect on platforms.  
  • Young people seeking mental health help and information will be blocked from finding it, because even discussions of suicide, depression, anxiety, and eating disorders will be hidden from them. 
  • Teens hoping to combat the problem of addiction—either their own, or that of their friends, families, and neighbors—will not have the resources they need to do so.  
  • Any young person seeking truthful news or information that could be considered depressing will find it harder to educate themselves and engage in current events and honest discussion. 
  • Adults in any of these groups who are unwilling to share their identities will find themselves shunted onto a second-class internet alongside the young people who have been denied access to this information. 

What’s Changed in the Latest (2024) Version of KOSA 

In its impact, the latest version of KOSA is not meaningfully different from previous versions. The “duty of care” censorship section remains in the bill, though modified as we will explain below. The latest version removes the authority of state attorneys general to sue or prosecute people for not complying with the “duty of care.” But KOSA still permits these state officials to enforce other parts of the bill based on their political whims, and we expect those officials to use this new law to the same censorious ends as they would have under previous versions. And the legal requirements of KOSA are still only possible for sites to safely follow if they restrict access to content based on age, effectively mandating age verification.   

Duty of Care is Still a Duty of Censorship 

Previously, KOSA outlined a wide collection of harms to minors that platforms had a duty to prevent and mitigate through “the design and operation” of their products. These included self-harm, suicide, eating disorders, substance abuse, and bullying, among others. This seemingly anodyne requirement—that apps and websites must take measures to prevent some truly awful things from happening—would have led to overbroad censorship of otherwise legal, important topics for everyone, as we’ve explained before.  

The updated duty of care says that a platform shall “exercise reasonable care in the creation and implementation of any design feature” to prevent and mitigate those harms. The difference is subtle, and ultimately, unimportant. There is no case law defining what is “reasonable care” in this context. This language still means increased liability merely for hosting and distributing otherwise legal content that the government—in this case the FTC—claims is harmful.  

Design Feature Liability 

The bigger textual change is that the bill now includes a definition of a “design feature,” which the bill requires platforms to limit for minors. The “design feature” of products that could lead to liability is defined as: 

any feature or component of a covered platform that will encourage or increase the frequency, time spent, or activity of minors on the covered platform. 

Design features include but are not limited to 

(A) infinite scrolling or auto play; 

(B) rewards for time spent on the platform; 

(C) notifications; 

(D) personalized recommendation systems; 

(E) in-game purchases; or 

(F) appearance altering filters. 

These design features are a mix of basic elements and those that may be used to keep visitors on a site or platform. There are several problems with this provision. First, it’s not clear when offering basic features that many users rely on, such as notifications, by itself creates a harm. But that points to the fundamental problem of this provision. KOSA is essentially trying to use features of a service as a proxy to create liability for speech online that the bill’s authors do not like. But the list of harmful designs shows that the legislators backing KOSA want to regulate online content, not just design.   

For example, if an online service presented an endless scroll of math problems for children to complete, or rewarded children with virtual stickers and other prizes for reading digital children’s books, would lawmakers consider those design features harmful? Of course not. Infinite scroll and autoplay are generally not a concern for legislators. It’s that these lawmakers do not like some lawful content that is accessible via an online service’s features. 

What KOSA tries to do here then is to launder restrictions on content that lawmakers do not like through liability for supposedly harmful “design features.” But the First Amendment still prohibits Congress from indirectly trying to censor lawful speech it disfavors.  

Allowing the government to ban content designs is a dangerous idea. If the FTC decided that direct messages, or encrypted messages, were leading to harm for minors, then under this language it could bring an enforcement action against a platform that allowed users to send such messages. 

Regardless of whether we like infinite scroll or auto-play on platforms, these design features are protected by the First Amendment, just like the design features we do like. If the government tried to prevent an online newspaper from using an infinite scroll feature or auto-playing videos, that law would be struck down. KOSA’s latest variant is no different.   

Attorneys General Can Still Use KOSA to Enact Political Agendas 

As we mentioned above, the enforcement available to attorneys general has been narrowed to no longer include the duty of care. But due to the rule of construction and the fact that attorneys general can still enforce other portions of KOSA, this is cold comfort. 

For example, it is true enough that the amendments to KOSA prohibit a state official from targeting an online service based on claims that, in hosting LGBTQ content, it violated KOSA’s duty of care. Yet that same official could use another provision of KOSA—which allows them to file suits based on failures in a platform’s design—to target the same content. The state attorney general could simply claim that they are not targeting the LGBTQ content, but rather the fact that the content was made available to minors via notifications, recommendations, or other features of a service. 

We shouldn’t kid ourselves that the latest version of KOSA will stop state officials from targeting vulnerable communities. And KOSA leaves all of the bill’s censorial powers with the FTC, a five-person commission nominated by the president, allowing a small group of federal officials to decide what content is dangerous for young people. Placing this enforcement power with the FTC is still a First Amendment problem: no government official, state or federal, has the power to dictate by law what people can read online.  

The Long Fight Against KOSA Continues in 2024 

For two years now, EFF has laid out the clear arguments against this bill. KOSA creates liability if an online service fails to perfectly police a variety of content that the bill deems harmful to minors. Services have little room to make any mistakes if some content is later deemed harmful to minors and, as a result, are likely to restrict access to a broad spectrum of lawful speech, including information about health issues like eating disorders, drug addiction, and anxiety.  

The fight against KOSA has amassed an enormous coalition of people of all ages and all walks of life who know that censorship is not the right approach to protecting people online, and that the promise of the internet is one that must apply equally to everyone, regardless of age. Some of the people who have advocated against KOSA from day one have now graduated high school or college. But every time this bill returns, more people learn why we must stop it from becoming law.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We cannot afford to allow the government to decide what information is available online. Please contact your representatives today to tell them to stop the Kids Online Safety Act from moving forward. 

States Attack Young People’s Constitutional Right to Use Social Media: 2023 Year in Review

Legislatures in more than half of the country targeted young people’s use of social media this year, with many of the proposals blocking adults’ ability to access the same sites. State representatives introduced dozens of bills that would limit young people’s use of some of the most popular sites and apps, either by requiring the companies to introduce or amend their features or data usage for young users, or by forcing those users to get permission from parents, and in some cases, share their passwords, before they can log on. Courts blocked several of these laws for violating the First Amendment—though some may go into effect later this year. 

How did we get to a point where state lawmakers are willing to censor large parts of the internet? In many ways, California’s Age Appropriate Design Code Act (AADC), passed in September of 2022, set the stage for this year’s battle. EFF asked Governor Newsom to veto that bill before it was signed into law, despite its good intentions in seeking to protect the privacy and well-being of children. Like many of the bills that followed it this year, it runs the risk of imposing surveillance requirements and content restrictions on a broader audience than intended. A federal court blocked the AADC earlier this year, and California has appealed that decision.

Fourteen months after California passed the AADC, it feels like a dam has broken: we’ve seen dangerous social media regulations for young people introduced across the country, and passed in several states, including Utah, Arkansas, and Texas. The severity and individual components of these regulations vary. Like California’s, many of these bills would introduce age verification requirements, forcing sites to identify all of their users, harming both minors’ and adults’ ability to access information online. We oppose age verification requirements, which are the wrong approach to protecting young people online. No one should have to hand over their driver’s license, or, worse, provide biometric information, just to access lawful speech on websites.

A Closer Look at State Social Media Laws Passed in 2023

Utah enacted the first child social media regulation this year, S.B. 152, in March. The law prohibits social media companies from providing accounts to a Utah minor, unless they have the express consent of a parent or guardian. We requested that Utah’s governor veto the bill.

We identified at least four reasons to oppose the law, many of which apply to other states’ social media regulations. First, young people have a First Amendment right to information that the law infringes upon. With S.B. 152 in effect, the majority of young Utahns will find themselves effectively locked out of much of the web absent their parents’ permission. Second, the law dangerously requires parental surveillance of young people’s accounts, harming their privacy and free speech. Third, the law endangers the privacy of all Utah users, as it requires many sites to collect and analyze private information, like government-issued identification, for every user, to verify ages. And fourth, the law interferes with the broader public’s First Amendment right to receive information by requiring that all users in Utah tie their accounts to their age and, ultimately, their identity, and it will lead to fewer people expressing themselves or seeking information online. 

The law passed despite these problems, as did Utah’s H.B. 311, which creates liability for social media companies should they, in the view of Utah lawmakers, create services that are addictive to minors. H.B. 311 is unconstitutional because it imposes a vague and unscientific standard for what might constitute social media addiction, potentially creating liability for core features of a service, such as letting you know that someone responded to your post. Both S.B. 152 and H.B. 311 are scheduled to take effect in March 2024.

Arkansas passed a similar law to Utah's S.B. 152 in April, which requires users of social media to prove their age or obtain parental permission to create social media accounts. A federal court blocked the Arkansas law in September, ruling that the age-verification provisions violated the First Amendment because they burdened everyone's ability to access lawful speech online. EFF joined the ACLU in a friend-of-the-court brief arguing that the statute was unconstitutional.

Texas, in June, passed a regulation similar to the Arkansas law, which would ban anyone under 18 from having a social media account unless they receive consent from parents or guardians. The law is scheduled to take effect in September 2024.

Given the strong constitutional protections for people, including children, to access information without having to identify themselves, federal courts have blocked the laws in Arkansas and California. The Utah and Texas laws are likely to suffer the same fate. EFF has warned that such laws were bad policy and would not withstand court challenges, in large part because applying online regulations specifically to young people often forces sites to use age verification, which comes with a host of problems, legal and otherwise. 

To that end, we spent much of this year explaining to legislators that comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harms they do to children. For an even more detailed account of our suggestions, see Privacy First: A Better Way to Address Online Harms. In short, comprehensive data privacy legislation would address the massive collection and processing of personal data that is the root cause of many problems online, and it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

Of course, states were not alone in their attempt to regulate social media for young people. Our Year in Review post on similar federal legislation that was introduced this year covers that fight, which was successful. Our post on the UK’s Online Safety Act describes the battle across the pond. 2024 is shaping up to be a year of court battles that may determine the future of young people’s access to speak out and obtain information online. We’ll be there, continuing to fight against misguided laws that do little to protect kids while doing much to invade everyone’s privacy and speech rights.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting For Your Digital Rights Across the Country: Year in Review 2023

December 29, 2023 at 14:42

EFF works every year to improve policy in ways that protect your digital rights in states across the country. Thanks to the messages of hundreds of EFF members across the country, we've spoken up for digital rights this year from Sacramento to Augusta.

Much of EFF's state legislative work has, historically, been in our home state of California—also often the most active state on digital civil liberties issues. This year, the Golden State passed several laws that strengthen consumer digital rights.

Two major laws we supported stand out in 2023. The first is S.B. 244, authored by California Sen. Susan Eggman, which makes it easier for individuals and independent repair shops to access materials and parts needed for maintenance on electronics and appliances. That means that Californians with a broken phone screen or a busted washing machine will have many more options for getting them fixed. Even though some electronics are not included, such as video game consoles, it still raises the bar for other right-to-repair bills.

S.B. 244 is one of the strongest right-to-repair laws in the country, doggedly championed by a group of advocates led by the California Public Interest Research Group, and we were proud to support it.

Another significant win comes with the signing of S.B. 362, also known as the CA Delete Act, authored by California Sen. Josh Becker. Privacy Rights Clearinghouse and Californians for Consumer Privacy led the fight on this bill, which builds on the state's landmark data privacy law and makes it easier for Californians to control their data through the state's data broker registry.

In addition to these wins, several other California bills we supported are now law. These include a measure that will broaden protections for immigration status data and one to facilitate better broadband access.

Health Privacy Is Data Privacy

States across the country continue to legislate at the intersection of digital privacy and reproductive rights. Both in California and beyond, EFF has worked with reproductive justice activists, medical practitioners, and other digital rights advocates to ensure that data from apps, electronic health records, law enforcement databases, and social media posts are not weaponized to prosecute those seeking or aiding those who seek reproductive or gender-affirming care. 

While some states are directly targeting those who seek this type of health care, other states are taking different approaches to strengthen protections. In California, EFF supported a bill that passed into law—A.B. 352, authored by CA Assemblymember Rebecca Bauer-Kahan—which extended the protections of California's health care data privacy law to apps such as period trackers. Washington, meanwhile, passed the "My Health, My Data Act"—H.B. 1155, authored by WA Rep. Vandana Slatter—that, among other protections, prohibits the collection of health data without consent. While EFF did not take a position on H.B. 1155, we do applaud the law's opt-in consent provisions and encourage other states to consider similar bills.

Consumer Privacy Bills Could Be Stronger

Since California passed the California Consumer Privacy Act in 2018, several states have passed their own versions of consumer privacy legislation. Unfortunately, many of these laws have been more consumer-hostile and business-friendly than EFF would like to see. In 2023, eight states—Delaware, Florida, Indiana, Iowa, Montana, Oregon, Tennessee, and Texas—passed their own versions of broad consumer privacy bills.

EFF did not support any of these laws, many of which can trace their lineage to a weak Virginia law we opposed in 2021. Yet not all of them are equally bad.

For example, while EFF could not support the Oregon bill after a legislative deal stripped it of its private right of action, the law is a strong starting point for privacy legislation moving forward. While it has its flaws, it is unique among state privacy laws: it requires businesses to share the names of actual third parties, rather than simply the categories of companies that have your information. So, instead of knowing a “data broker” has your information and hitting a dead end in following your own data trail, you can know exactly where to file your next request. EFF participated in a years-long process to bring that bill together, and we thank the Oregon Attorney General's office for their work to keep it as strong as it is.

EFF also wants to give plaudits to Montana for another bill—a strong genetic privacy bill passed this year. The bill is a good starting point for other states, and shows Montana is thinking critically about how to protect people from overbroad data collection and surveillance.

Of course, one post can't capture all the work we did in states this year. In particular, the curious should read our Year in Review post specifically focused on children’s privacy, speech, and censorship bills introduced in states this year. But EFF was able to move the ball forward on several issues this year—and will continue to fight for your digital rights in statehouses from coast to coast.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Protecting Kids on Social Media Act: Amended and Still Problematic

Senators who believe that children and teens must be shielded from social media have updated the problematic Protecting Kids on Social Media Act, though it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition.  

As we wrote in August, the original bill (S. 1291) contained a host of problems. A recent draft of the amended bill gets rid of some of the most flagrantly unconstitutional provisions: It no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media. 

However, the amended bill is still rife with issues.   

The biggest is that it prohibits children under 13 from using any ad-based social media. Though many social media platforms do require users to be over 13 to join (primarily to avoid liability under COPPA), some platforms designed for young people do not. Most platforms designed for young people are not ad-based, but there is no reason that young people should be barred entirely from a thoughtful, cautious platform that is designed for children but also relies on contextual ads. Were this bill made law, ad-based platforms might switch to a fee-based model, limiting access to only those young people who can afford the fee. Banning children under 13 from having social media accounts is a massive overreach that takes authority away from parents and infringes on the First Amendment rights of minors.  

The vast majority of content on social media is lawful speech fully protected by the First Amendment. Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media. At the same time, parents have a right to oversee their children’s online activities. But the First Amendment forbids Congress from making a freewheeling determination that children can be blocked from accessing lawful speech. The Supreme Court has ruled that there is no children’s exception to the First Amendment.   

Perhaps recognizing this, the amended bill includes a caveat that children may still view publicly available social media content that is not behind a login, or through someone else’s account (for example, a parent’s account). But this does not help the bill. Because the caveat is essentially a giant loophole that will allow children to evade the bill’s prohibition, it raises legitimate questions about whether the sponsors are serious about trying to address the purported harms they believe exist anytime minors access social media. As the Supreme Court wrote in striking down a California law aimed at restricting minors’ access to violent video games, a law that is so “wildly underinclusive … raises serious doubts about whether the government is in fact pursuing the interest it invokes….” If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

Another problem: The amended bill employs a new standard for determining whether platforms know the age of users: “[a] social media platform shall not permit an individual to create or maintain an account if it has actual knowledge or knowledge fairly implied on the basis of objective circumstances that the individual is a child [under 13].” As explained below, this may still force online platforms to engage in some form of age verification for all their users. 

While this standard comes from FTC regulatory authority, the amended bill attempts to define it for the social media context. The amended bill directs courts, when determining whether a social media company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor, to consider “competent and reliable empirical evidence, taking into account the totality of the circumstances, including whether the operator, using available technology, exercised reasonable care.” But, according to the amended bill, “reasonable care” is not meant to mandate “age gating or age verification,” the collection of “any personal data with respect to the age of users that the operator is not already collecting in the normal course of business,” the viewing of “users’ private messages” or the breaking of encryption. 

While these exclusions provide superficial comfort, the reality is that companies will take the path of least resistance and will be incentivized to implement age gating and/or age verification, which we’ve raised concerns about many times over. This bait-and-switch tactic is not new in bills that aim to protect young people online. Legislators, aware that age verification requirements will likely be struck down, are explicit that the bills do not require age verification. Then, they write a requirement that would lead most companies to implement age verification or else face liability.  

In practice, it’s not clear how a court is expected to determine whether a company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor in the event of an enforcement action. While the lack of age gating/age verification mechanisms may not be proof that a company failed to exercise reasonable care in letting a child under 13 use the site, the use of age gating/age verification tools to deny children under 13 the ability to use a social media site will surely be an acceptable way to avoid liability. Moreover, without more guidance, this standard of “reasonable care” is quite vague, which poses additional First Amendment and due process problems. 

Finally, although the bill no longer creates a digital ID pilot program for age verification, it still tries to push the issue forward. The amended bill orders a study and report looking at “current available technology and technologically feasible methods and options for developing and deploying systems to provide secure digital identification credentials; and systems to verify age at the device and operating system level.” But any consideration of digital identification for age verification is dangerous, given the risk of sliding down the slippery slope toward a national ID that is used for many more things than age verification and that threatens individual privacy and civil liberties. 

Colorado Supreme Court Upholds Keyword Search Warrant

Today, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase. In a weak and ultimately confusing opinion, the court upheld the warrant, finding the police relied on it in good faith. EFF filed two amicus briefs and was heavily involved in the case.

The case is People v. Seymour, which involved a tragic home arson that killed several people. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.

Like geofence warrants, keyword warrants cast a dragnet that requires a provider to search its entire reserve of user data—in this case, queries by one billion Google users. Police generally have no identified suspects; instead, the sole basis for the warrant is the officer’s hunch that the suspect might have searched for something in some way related to the crime.

Keyword warrants rely on the fact that it is virtually impossible to navigate the modern Internet without entering search queries into a search engine like Google's. By some accounts, there are over 1.15 billion websites, and tens of billions of webpages. Google Search processes as many as 100,000 queries every second. Many users have come to rely on search engines to such a degree that they routinely search for the answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant, even friends, family members, doctors, or clergy. Over the course of months and years, there is little about a user’s life that will not be reflected in their search keywords, from the mundane to the most intimate. The result is a vast record of some of users’ most private and personal thoughts, opinions, and associations.

In the Seymour opinion, the four-justice majority recognized that people have a constitutionally protected privacy interest in their internet search queries and that these queries impact a person’s free speech rights. The U.S. Supreme Court has held that warrants like this one that target speech are highly suspect, so courts must apply constitutional search-and-seizure requirements with “scrupulous exactitude.” Despite recognizing this directive to engage in careful, in-depth analysis, the Seymour majority’s reasoning was cursory and at points mistaken. For example, although the court found that the Colorado constitution protects users’ privacy interests in their search queries, it held that the Fourth Amendment does not, due to the third-party doctrine, because federal courts have held that there is no expectation of privacy in IP addresses. However, this overlooks the queries themselves, which many courts have suggested are more akin to the location information that was found to be protected in Carpenter v. United States. Similarly, the Colorado court neglected to address the constitutionality of Google’s initial search of all its users’ search queries because it found that the things seized—users’ queries and IP addresses—were sufficiently narrow. Finally, the court merely assumed without deciding that the warrant lacked probable cause, a shortcut that allowed the court to overlook the warrant’s facial deficiency and therefore uphold it on the “good faith exception.”

If the majority had truly engaged with the deep constitutional issues presented by this keyword warrant, it would have found, as the three justices dissenting on this point did, that keyword warrants “are tantamount to a high-tech version of the reviled ‘general warrants’ that first gave rise to the protections in the Fourth Amendment.” They lack probable cause because a mere hunch that some unknown person might have searched for a specific phrase related to the crime is insufficient to support a search of everyone’s search queries, let alone a specific, previously unnamed individual. And keyword warrants are insufficiently particular because they do next to nothing to narrow the universe of the search.

We are disappointed in the result in this case. Keyword warrants not only have the potential to implicate innocent people, they allow the government to target people for sensitive search terms like the drug mifepristone, or the names of gender-affirming healthcare providers, or information about psychedelic drugs. Even searches that refer to crimes or acts of terror are not themselves criminal in all or even most cases (otherwise historians, reporters, and crime novelists could all be subject to criminal investigation). Dragnet warrants that target speech have no place in a democracy, and we will continue to challenge them in the courts and to support legislation to ban them entirely.
