
The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year

Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy-first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably, at some point, someone decides to poke in and steal that data.

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.

The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using info stealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner); though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 

The Only Stalkers Allowed Award: mSpy

mSpy, a commercially available mobile stalkerware app owned by the Ukraine-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex-)partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.

The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the stolen data being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve says that since the breach it has taken some basic steps, like resetting user passwords and strengthening its security infrastructure.

The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 

The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) in which 576,000 accounts were compromised using a “credential stuffing attack.” This is a common, relatively easy sort of automated attack in which thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were exposed in the Comcast data breach in 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.
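For the curious, here’s a rough sketch (not Roku’s actual system, just an illustration) of how you or a service can check whether a password has already appeared in a known breach, using Have I Been Pwned’s Pwned Passwords range API. The API works on k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
import hashlib

import requests  # third-party HTTP client


def times_pwned(password: str) -> int:
    """Return how many times a password appears in known breach corpora (0 if none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; the password itself never leaves this machine.
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response is a list of "HASH_SUFFIX:COUNT" lines for that prefix.
    for line in resp.text.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    print(times_pwned("password123"))  # any nonzero count means the password is burned
```

Any reused password that shows up in a check like this is exactly the kind of credential attackers feed into stuffing attacks.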

The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.

The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP as well. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.
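To see why leaking the OTP seed is fatal, here’s a minimal sketch using the pyotp library (the seed below is made up): anyone holding the seed can compute exactly the same six-digit codes as the legitimate user’s authenticator app.

```python
import pyotp  # third-party TOTP library

# A hypothetical base32 seed of the kind the leaky API was returning.
leaked_seed = "JBSWY3DPEHPK3PXP"

# An attacker with the seed generates the current one-time code...
attacker = pyotp.TOTP(leaked_seed)
code = attacker.now()
print("Current OTP:", code)

# ...and it verifies exactly as if it came from the victim's phone.
print("Accepted?", attacker.verify(code))  # True
```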

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!

-EFF thanks guest author Troy Hunt for this contribution to the Breachies.
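The general fix for this whole class of bug is boring but effective: an API should only ever return fields that are on an explicit allowlist, so password hashes, 2FA seeds, and reset codes physically cannot end up in a response. Here’s a minimal sketch with hypothetical field names (Spoutible’s real schema is not known to us):

```python
# Hypothetical user record as stored server-side.
user_record = {
    "handle": "example_user",
    "display_name": "Example User",
    "email": "user@example.com",
    "ip_address": "203.0.113.7",
    "password_bcrypt": "$2b$12$...",
    "totp_secret": "JBSWY3DPEHPK3PXP",
    "em_code": "482913",
}

# Only fields on this allowlist ever leave the server.
PUBLIC_FIELDS = {"handle", "display_name"}


def public_profile(record: dict) -> dict:
    """Return only the fields that are safe to expose through a public API."""
    return {key: value for key, value in record.items() if key in PUBLIC_FIELDS}


print(public_profile(user_record))  # {'handle': 'example_user', 'display_name': 'Example User'}
```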

The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.

The Biggest Health Breach We’ve Ever Seen Award: Change Healthcare

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.

The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a Senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated. 

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 

The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records for individuals using these companies. A combination of infostealer malware infections on non-Snowflake machines as well as weak security used to protect the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April-July of this year, Snowflake was not requiring two-factor authentication, an account security measure which could have provided protection against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to its website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, it stores and indexes troves of customer data for companies to analyze. And the more data stored, the bigger the target, and the more leverage thieves have when extorting those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, this includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either don’t collect that data in the first place, or shorten the retention period so it isn’t stored any longer than needed. Otherwise it just sits there waiting for the next breach.
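What would “shorten the retention period” look like in practice? At its simplest, it’s a scheduled job that deletes anything older than the window a company actually needs. The sketch below is a generic illustration of that idea, not how Snowflake or AT&T actually manage their data:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; a shorter window means less data for a thief to steal.
RETENTION = timedelta(days=90)


def purge_expired(records: list[dict]) -> list[dict]:
    """Drop every record older than the retention window and keep the rest."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [r for r in records if r["created_at"] >= cutoff]


# Example: a two-entry call log where only the recent record survives the purge.
logs = [
    {"caller": "+15555550100", "created_at": datetime.now(timezone.utc) - timedelta(days=200)},
    {"caller": "+15555550101", "created_at": datetime.now(timezone.utc) - timedelta(days=5)},
]
print(len(purge_expired(logs)))  # 1
```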

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you (see the short sketch after this list). When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. If you have kids, you can freeze their credit too, even though it might sound absurd considering they can’t even open bank accounts.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 
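As a footnote to the first tip, here’s how little it takes to generate a strong random password yourself with Python’s standard library; a reputable password manager does something broadly equivalent under the hood:

```python
import secrets
import string


def make_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One unique password per site, so a breach at one service can't cascade to the others.
print(make_password())
```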

(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.

The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company–and with it, the sensitive DNA data that it’s collected, and stored, from many of its 15 million customers. Customers and their relatives are rightly concerned. Research has shown that a majority of white Americans can already be identified from just 1.3 million users of a similar service, GEDMatch, due to genetic likenesses, even though GEDMatch has a much smaller database of genetic profiles. 23andMe has about ten times as many customers.

Selling a giant trove of our most sensitive data is a bad idea that the company should avoid at all costs. And for now, the company appears to have backed off its consideration of a third-party buyer. Before 23andMe reconsiders, it should at the very least make a series of privacy commitments to all its users. Those should include: 

  • Do not consider a sale to any company with ties to law enforcement or a history of security failures.
  • Prior to any acquisition, affirmatively ask all users if they would like to delete their information, with an option to download it beforehand.
  • Prior to any acquisition, seek affirmative consent from all users before transferring user data. The consent should give people a real choice to say “no.” It should be separate from the privacy policy, contain the name of the acquiring company, and be free of dark patterns.
  • Prior to any acquisition, require the buyer to make strong privacy and security commitments. That should include a commitment to not let law enforcement indiscriminately search the database, and to prohibit disclosing any person’s genetic data to law enforcement without a particularized warrant. 
  • Reconsider your own data retention and sharing policies. People primarily use the service to obtain a genetic test. A survey of 23andMe customers in 2017 and 2018 showed that over 40% were unaware that data sharing was part of the company’s business model.  

23andMe is already legally required to provide users in certain states with some of these rights. But 23andMe—and any company considering selling such sensitive data—should go beyond current law to assuage users’ real privacy fears. In addition, lawmakers should continue to pass and strengthen protections for genetic privacy. 

The privacy of personal genetic information collected by companies like 23andMe is always going to be at some level of risk, which is why we suggest consumers think very carefully before using such a service. Genetic data is immutable and can reveal very personal details about you and your family members. Data breaches are a serious concern wherever sensitive data is stored, and last year’s breach of 23andMe exposed personal information from nearly half of its customers. The data can be abused by law enforcement to indiscriminately search for evidence of a crime. Although 23andMe’s policies require a warrant before releasing information to the police, some other companies do not. In addition, the private sector could use your information to discriminate against you. Thankfully, existing law prevents genetic discrimination in health insurance and employment.  

What Happens to My Genetic Data If 23andMe is Sold to Another Company?

In the event of an acquisition or liquidation through bankruptcy, 23andMe must still obtain separate consent from users in about a dozen states before it could transfer their genetic data to an acquiring company. Users in those states could simply refuse. In addition, many people in the United States are legally allowed to access and delete their data either before or after any acquisition. Separately, the buyer of 23andMe would, at a minimum, have to comply with existing genetic privacy laws and 23andMe's current privacy policies. It would be up to regulators to enforce many of these protections. 

Below is a general legal lay of the land, as we understand it.  

  • 23andMe must obtain consent from many users before transferring their data in an acquisition. Those users could simply refuse. At least a dozen states have passed consumer data privacy laws specific to genetic privacy. For example, Montana’s 2023 law would require consent to be separate from other documents and to list the buyer’s name. While the consent requirements vary slightly, similar laws exist in Alabama, Arizona, California, Kentucky, Nebraska, Maryland, Minnesota, Tennessee, Texas, Virginia, Utah, and Wyoming. Notably, Wyoming’s law has a private right of action, which allows consumers to defend their own rights in court.
  • Many users have the legal right to access and delete their data stored with 23andMe before or after an acquisition. About 19 states have passed comprehensive privacy laws which give users deletion and access rights, but not all have taken effect. Many of those laws also classify genetic data as sensitive and require companies to obtain consent to process it. Unfortunately, most if not all of these laws allow companies like 23andMe to freely transfer user data as part of a merger, acquisition, or bankruptcy. 
  • 23andMe must comply with its own privacy policy. Otherwise, the company could be sanctioned for engaging in deceptive practices. Unfortunately, its current privacy policy allows for transfers of data in the event of a merger, acquisition, or bankruptcy. 
  • Any buyer of 23andMe would likely have to offer existing users privacy rights that are equal or greater to the ones offered now, unless the buyer obtains new consent. The Federal Trade Commission has warned companies not to engage in the unfair practice of quietly reducing privacy protections of user data after an acquisition. The buyer would also have to comply with the web of comprehensive and genetic-specific state privacy laws mentioned above. 
  • The federal Genetic Information Nondiscrimination Act of 2008 prevents genetic-based discrimination by health insurers and employers. 

What Can You Do to Protect Your Genetic Data Now?

Existing users can demand that 23andMe delete their data or revoke some of their past consent to research. 

If you don’t feel comfortable with a potential sale, you can consider downloading a local copy of your information to create a personal archive, and then deleting your 23andMe account. Doing so will remove all your information from 23andMe, and if you haven’t already requested it, the company will also destroy your genetic sample. Deleting your account will also remove any genetic information from future research projects, though there is no way to remove anything that’s already been shared. We’ve put together directions for archiving and deleting your account here. When you get your archived account information, some of your data will be in more readable formats than others. For example, your “Reports Summary” will arrive as a PDF that’s easy to read and includes information about traits and your ancestry report. Other information, like the family tree, arrives in a less readable format, like a JSON file.

You also may be one of the 80% or so of users who consented to having your genetic data analyzed for medical research. You can revoke your consent to future research as well by sending an email. Under this program, third-party researchers who conduct analyses on that data have access to this information, as well as some data from additional surveys and other information you provide. Third-party researchers include non-profits, pharmaceutical companies like GlaxoSmithKline, and research institutions. 23andMe has used this data to publish research on diseases like Parkinson’s. According to the company, this data is deidentified, or stripped of obvious identifying information such as your name and contact information. However, genetic data cannot truly be de-identified. Even if separated from obvious identifiers like name, it is still forever linked to only one person in the world. And at least one study has shown that, when combined with data from GenBank, a National Institutes of Health genetic sequence database, data from some genealogical databases can result in the possibility of re-identification. 

What Can 23andMe, Regulators, and Lawmakers Do?

As mentioned above, 23andMe must follow existing law. And it should make a series of additional commitments before ever reconsidering a sale. Most importantly, it must give every user a real choice to say “no” to a data transfer and ensure that any buyer makes real privacy commitments. Other consumer genetic genealogy companies should proactively take these steps as well. Companies should be crystal clear about where the information goes and how it’s used, and they should require an individualized warrant before allowing police to comb through their database. 

Government regulators should closely monitor the company’s plans and press the company to explain how it will protect user data in the event of a transfer of ownership—similar to the FTC’s scrutiny of Facebook’s earlier acquisition of WhatsApp.

Lawmakers should also work to pass stronger comprehensive privacy protections in general and genetic privacy protections in particular. While many of the state-based genetic privacy laws are a good start, they generally lack a private right of action and only protect a slice of the U.S. population. EFF has long advocated for a strong federal privacy law that includes a private right of action. 

Our DNA is quite literally what makes us human. It is inherently personal and deeply revealing, not just of ourselves but our genetic relatives as well, making it deserving of the strongest privacy protections. Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act, and when they do, EFF will be ready to support them. 

The California Supreme Court Should Help Protect Your Stored Communications

When you talk to your friends and family on Snapchat or Facebook, you should be assured that those services will not freely disclose your communications to the government or other private parties.

That is why the California Supreme Court must take up and reverse the appellate opinion in the case of Snap v. The Superior Court of San Diego County. This opinion dangerously weakens the Stored Communications Act (SCA), which is one of the few federal privacy laws on the books. The SCA prevents certain communications providers from disclosing the content of your communications to private parties or the government without a warrant (or other narrow exceptions).

EFF submitted an amicus letter to the court, along with the Center for Democracy & Technology.

The lower court incorrectly ruled that modern services like Snapchat and Facebook largely do not have to comply with the 1986 law. Since those companies already access the content of your communications for their own business purposes—including to target their behavioral advertising—the lower court held that they can also freely disclose the content of your communications to anyone.

The ruling came in the context of a criminal defendant who sought access to the communications of a deceased victim with a subpoena. In compliance with the law, both Meta and Snap resisted disclosing the information.

The lower court’s opinion conflicts with nearly 40 years of interpretation by Congress and other courts. It ignores the SCA’s primary purpose of protecting your communications from disclosure. And the opinion gives too much weight to companies’ terms of service. Those terms, which almost no one reads, are where most companies bury their own right to access your communications.

There is no doubt that companies should also be restricted in how they access and use your data, and we need stronger laws to make that happen. For years, EFF has advocated for comprehensive data privacy legislation, including data minimization and a ban on online behavioral advertising. But that does not affect the current analysis of the SCA, which protects against disclosure now.

If the California Supreme Court does not take this up, Meta, Snap, and other providers would be allowed to voluntarily disclose the content of their users’ communications to any other corporations for any reason, to parties in civil litigation, and to the government without a warrant. Private parties could also compel disclosure with a mere subpoena.

Court to California: Try a Privacy Law, Not Online Censorship

In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

The court also provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

  1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
  2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization, and limits on processing precise geolocation data.
  3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, the privacy principles in the Act could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

“There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court in the first instance to apply the correct tests.

In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate allegations against privacy laws might best be resolved with “as-applied claims” in specific contexts, instead of on their face.

Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.

Texas Wins $1.4 Billion Biometric Settlement Against Meta. It Would Have Happened Sooner With Consumer Enforcement

July 31, 2024 at 12:27

In Texas’ first public enforcement of its biometric privacy law, Meta agreed to pay $1.4 billion to settle claims that its now-defunct face recognition system violated state law. The law was first passed in 2001.

As part of the Texas settlement, Meta (formerly Facebook) can seek pre-approval from the state for any future biometric projects. The settlement does not require Meta to destroy any models or algorithms trained on Texas biometric data, as the state urged in its original lawsuit. And the settlement appears designed to avoid that remedy in the future.

Facebook Previously Discontinued Its Face Recognition System

Meta announced in November 2021 that it would shut down its tool that scanned the face of every person in photos posted on the platform. The tool identified and tagged users who had purportedly opted in to the feature. At the time, Meta also announced that it would delete more than a billion face templates.

The company shut down this tool months after agreeing to a $650 million class action settlement brought by Illinois consumers under the state's strong biometric privacy law.

Texas’ Law Has Steep Penalties But Little Enforcement

The Texas settlement nearly three years later is welcome, but it also highlights the need to give consumers their own private right of action to enforce consumer data privacy laws.

The Texas Attorney General has sole authority to enforce the Texas Capture or Use of Biometric Identifier (CUBI) law, which prevents companies from capturing biometric identifiers for a commercial purpose, unless notice and consent is first given. The law has existed for decades, but this is the first time it has been enforced.

State regulators do not always have the resources or the will to aggressively enforce consumer privacy laws. And without strong enforcement, companies will not make compliance a priority. Most of the comprehensive consumer data privacy laws passed at the state level in the past few years lack a private right of action, and there have been few public enforcement actions.

That is why consumers should be empowered to bring lawsuits on their own behalf. That should be the priority in any new law passed. Hopefully, states like Texas will continue to enforce their privacy laws. Still, there is no substitute for consumer enforcement.

Americans Deserve More Than the Current American Privacy Rights Act

April 16, 2024 at 15:03

EFF is concerned that a new federal bill would freeze consumer data privacy protections in place, by preempting existing state laws and preventing states from creating stronger protections in the future. Federal law should be the floor on which states can build, not a ceiling.

We also urge the authors of the American Privacy Rights Act (APRA) to strengthen other portions of the bill. It should be easier to sue companies that violate our rights. The bill should limit sharing with the government and expand the definition of sensitive data. And it should narrow exceptions that allow companies to exploit our biometric information, our so-called “de-identified” data, and our data obtained in corporate “loyalty” schemes.

Despite our concerns with the APRA bill, we are glad Congress is pivoting the debate to a privacy-first approach to online regulation. Reining in companies’ massive collection, misuse, and transfer of everyone’s personal data should be the unifying goal of those who care about the internet. This debate has been absent at the federal level in the past year, giving breathing room to flawed bills that focus on censorship and content blocking, rather than privacy.

In general, the APRA would require companies to minimize their processing of personal data to what is necessary, proportionate, and limited to certain enumerated purposes. It would specifically require opt-in consent for the transfer of sensitive data, and most processing of biometric and genetic data. It would also give consumers the right to access, correct, delete, and export their data. And it would allow consumers to universally opt-out of the collection of their personal data from brokers, using a registry maintained by the Federal Trade Commission.

We welcome many of these privacy protections. Below are a few of our top priorities to correct and strengthen the APRA bill.

Allow States to Pass Stronger Privacy Laws

The APRA should not preempt existing and future state data privacy laws that are stronger than the current bill. The ability to pass stronger bills at the state and local level is an important tool in the fight for data privacy. We ask that Congress not compromise our privacy rights by undercutting the very state-level action that spurred this compromise federal data privacy bill in the first place.

Subject to exceptions, the APRA says that no state may “adopt, maintain, enforce, or continue in effect” any state-level privacy requirement addressed by the new bill. APRA would allow many state sectoral privacy laws to remain, but it would still preempt protections for biometric data, location data, online ad tracking signals, and maybe even privacy protections in state constitutions or some other limits on what private companies can share with the government. At the federal level, the APRA would also wrongly preempt many parts of the federal Communications Act, including provisions that limit a telephone company’s use, disclosure, and access to customer proprietary network information, including location information.

Just as important, it would prevent states from creating stronger privacy laws in the future. States are more nimble at passing laws to address new privacy harms as they arise, compared to Congress which has failed for decades to update important protections. For example, if lawmakers in Washington state wanted to follow EFF’s advice to ban online behavioral advertising or to allow its citizens to sue companies for not minimizing their collection of personal data (provisions where APRA falls short), state legislators would have no power to do so under the new federal bill.

Make It Easier for Individuals to Enforce Their Privacy Rights

The APRA should prevent coercive forced arbitration agreements and class action waivers, allow people to sue for statutory damages, and allow them to bring their case in state court. These rights would allow for rigorous enforcement and help force companies to prioritize consumer privacy.

The APRA has a private right of action, but it is a half-measure that still lets companies side-step many legitimate lawsuits. And the private right of action does not apply to some of the most important parts of the law, including the central data minimization requirement.

The favorite tool of companies looking to get rid of privacy lawsuits is to bury provisions in their terms of service that force individuals into private arbitration and prevent class action lawsuits. The APRA does not address class action waivers and only prevents forced arbitration for children and people who allege “substantial” privacy harm. Statutory damages and enforcement in state courts are also essential, because federal courts often still struggle to acknowledge privacy harm as real, relying instead on a cramped view that does not recognize privacy as a human right. Finally, the bill would allow companies to cure violations rather than face a lawsuit, incentivizing companies to skirt the law until they are caught.

Limit Exceptions for Sharing with the Government

APRA should close a loophole that may allow data brokers to sell data to the government and should require the government to obtain a court order before compelling disclosure of user data. This is important because corporate surveillance and government surveillance are often the same.

Under the APRA, government contractors do not have to follow the bill’s privacy protections. Those include any “entity that is collecting, processing, retaining, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, to the extent that such entity is acting as a service provider to the government entity.” Read broadly, this provision could protect data brokers who sell biometric information and location information to the government. In fact, Clearview AI previously argued it was exempt from Illinois’ strict biometric law using a similar contractor exception. This is a point that needs revision because other parts of the bill rightly prevent covered entities (government contractors excluded) from selling data to the government for the purpose of fraud detection, public safety, and criminal activity detection.

The APRA also allows entities to transfer personal data to the government pursuant to a “lawful warrant, administrative subpoena, or other form of lawful process.” EFF urges that the requirement be strengthened to at least a court order or warrant with prompt notice to the consumer. Protections like this are not unique, and it is especially important in the wake of the Dobbs decision.

Strengthen the Definition of Sensitive Data

The APRA has heightened protections for sensitive data, and it includes a long list of 18 categories of sensitive data, such as biometrics, precise geolocation, private communications, and an individual’s online activity over time and across websites. This is a good list that can be added to. We ask Congress to add other categories, like immigration status, union membership, employment history, familial and social relationships, and any covered data processed in a way that would violate a person’s reasonable expectation of privacy. The sensitivity of data is context specific—meaning any data can be sensitive depending on how it is used. The bill should be amended to reflect that.

Limit Other Exceptions for Biometrics, De-identified Data, and Loyalty Programs

An important part of any bill is to make sure the exceptions do not swallow the rule. The APRA’s exceptions on biometric information, de-identified data, and loyalty programs should be narrowed.

In APRA, biometric information means data “generated from the measurement or processing of the individual’s unique biological, physical, or physiological characteristics that is linked or reasonably linkable to the individual” and excludes “metadata associated with a digital or physical photograph or an audio or video recording that cannot be used to identify an individual.” EFF is concerned this definition will not protect biometric information used for analysis of sentiment, demographics, and emotion, and could be used to argue hashed biometric identifiers are not covered.

De-identified data is excluded from the definition of personal data covered by the APRA, and companies and service providers can turn personal data into de-identified data to process it however they want. The problem with de-identified data is that it often is not truly de-identified and can be re-linked to individuals. Moreover, many people do not want the private data they store in confidence with a company to be used to improve that company’s product or train its algorithm—even if the data has purportedly been de-identified.

Under the APRA, many companies can operate loyalty programs and can sell the data collected through them with opt-in consent. Loyalty programs are a type of pay-for-privacy scheme that pressures people to surrender their privacy rights as if they were a commodity. Worse, because of our society’s glaring economic inequalities, these schemes will unjustly lead to a society of privacy “haves” and “have-nots.” At the very least, the bill should be amended to prevent companies from selling data that they obtain from a loyalty program.

We welcome Congress' privacy-first approach in the APRA and encourage the authors to improve the bill to ensure privacy is protected for generations to come.

Victory! EFF Helps Resist Unlawful Warrant and Gag Order Issued to Independent News Outlet

Over the past month, the independent news outlet Indybay has quietly fought off an unlawful search warrant and gag order served by the San Francisco Police Department. Today, a court lifted the gag order and confirmed the warrant is void. The police also promised the court not to seek another warrant for Indybay’s information in this investigation.

Nevertheless, Indybay was unconstitutionally gagged from speaking about the warrant for more than a month. And the SFPD once again violated the law despite past assurances that it was putting safeguards in place to prevent such violations.

EFF provided pro bono legal representation to Indybay throughout the process.

Indybay’s experience highlights a worrying police tactic of demanding unpublished source material from journalists, in violation of clearly established shield laws. Warrants like the one issued by the police invade press autonomy, chill news gathering, and discourage sources from contributing. While this is a victory, Indybay was still gagged from speaking about the warrant, and it would have had to pay thousands of dollars in legal fees to fight the warrant without pro bono counsel. Other small news organizations might not be so lucky. 

It started on January 18, 2024, when an unknown member of the public published a story on Indybay’s unique community-sourced newswire, which allows anyone to publish news and source material on the website. The author claimed credit for smashing windows at the San Francisco Police Credit Union.

On January 24, police sought and obtained a search warrant that required Indybay to turn over any text messages, online identifiers like IP addresses, or other unpublished information that would help reveal the author of the story. The warrant also ordered Indybay not to speak about the warrant for 90 days. With the help of EFF, Indybay responded that the search warrant was illegal under both California and federal law and requested that the SFPD formally withdraw it. After several more requests, and shortly before the deadline to comply with the search warrant, the police agreed not to pursue the warrant further “at this time.” Under California law, the warrant became void when it was not executed within 10 days, but the gag order remained in place.

Indybay went to court to confirm the warrant would not be renewed and to lift the gag order. It argued it was protected by California and federal shield laws that make it all but impossible for law enforcement to use a search warrant to obtain unpublished source material from a news outlet. California law, Penal Code § 1524(g), in particular, mandates that “no warrant shall issue” for that information. The Federal Privacy Protection Act has some exceptions, but they were clearly not applicable in this situation. Nontraditional and independent news outlets like Indybay are covered by these laws (Indybay fought this same fight more than a decade ago when one of its photographers successfully quashed a search warrant). And when attempting to unmask a source, an IP address can sometimes be as revealing as a reporter’s notebook. In a previous case, EFF established that IP addresses are among the types of unpublished journalistic information typically protected from forced disclosure by law.

In addition, Indybay argued that the gag order was an unconstitutional content-based prior restraint on speech—noting that the government did not have a compelling interest in hiding unlawful investigative techniques.

Rather than fight the case, the police conceded the warrant was void, promised not to seek another search warrant for Indybay’s information during the investigation, and agreed to lift the gag order. A San Francisco Superior Court Judge signed an order confirming that.

That this happened at all is especially concerning because the SFPD had agreed to institute safeguards following its illegal execution of a search warrant against freelance journalist Bryan Carmody in 2019. In settling a lawsuit brought by Carmody, the SFPD agreed to ensure all its employees were aware of its policies concerning warrants directed at journalists. As a result, the department instituted internal guidance and procedures, not all of which appear to have been followed in Indybay’s case.

Moreover, the court should never have signed the search warrant and gag order, given that they were obviously directed at a news organization. We call on the court and the SFPD to meet with representatives of journalists to make sure we are not facing another unconstitutional gag order and search warrant a few years from now.

The San Francisco Police Department's public statement on this case is incomplete. It leaves out the fact that Indybay was gagged for more than a month and that it was only Indybay's continuous resistance that prevented the police from acting on the warrant. It also does not mention whether the police department's internal policies were followed in this case. For one thing, this type of warrant requires approval from the chief of police before it is sought, not after. 

Read more here:

  • Stipulated Order
  • Motion to Quash
  • Search Warrant
  • Trujillo Declaration
  • Burdett Declaration
  • SFPD Press Release

EFF to Court: Strike Down Age Estimation in California But Not Consumer Privacy

February 14, 2024 at 18:44

The Electronic Frontier Foundation (EFF) called on the Ninth Circuit to rule that California’s Age Appropriate Design Code (AADC) violates the First Amendment, while not casting doubt on well-written data privacy laws. EFF filed an amicus brief in the case NetChoice v. Bonta, along with the Center for Democracy & Technology.

A lower court already ruled the law is likely unconstitutional. EFF agrees, but we asked the appeals court to chart a narrower path. EFF argued the AADC’s age estimation scheme and vague terms that describe amorphous “harmful content” render the entire law unconstitutional. But the lower court also incorrectly suggested that many foundational consumer privacy principles cannot pass First Amendment scrutiny. That is a mistake that EFF asked the Ninth Circuit to fix.

In late 2022, California passed the AADC with the goal of protecting children online. It has many data privacy provisions that EFF would like to see in a comprehensive federal privacy bill, like data minimization, strong limits on the processing of geolocation data, regulation of dark patterns, and enforcement of privacy policies.

Government should provide such privacy protections to all people. The protections in the AADC, however, are guaranteed only to children. And to offer those protections to children but not adults, technology companies are strongly incentivized to “estimate the age” of their entire user base—children and adults alike. While the method is not specified, techniques could include submitting a government ID or a biometric scan of your face. In addition, technology companies are required to assess their products to determine whether they are designed to expose children to undefined “harmful content” and to determine what is in the undefined “best interest of children.”

In its brief, EFF argued that the AADC’s age estimation scheme raises the same problems as other age verification laws that have been almost universally struck down, often with help from EFF. The AADC burdens adults’ and children’s access to protected speech and frustrates all users’ right to speak anonymously online. In addition, EFF argued that the vague terms offer no clear standards, and thus give government officials too much discretion in deciding what conduct is forbidden, while incentivizing platforms to self-censor given uncertainty about what is allowed.

“Many internet users will be reluctant to provide personal information necessary to verify their ages, because of reasonable doubts regarding the security of the services, and the resulting threat of identity theft and fraud,” EFF wrote.

Because age estimation is essential to the AADC, the entire law should be struck down for that reason alone, without assessing the privacy provisions. EFF asked the court to take that narrow path.

If the court instead chooses to address the AADC’s privacy protections, EFF cautioned that many of the principles reflected in those provisions, when stripped of the unconstitutional censorship provisions and vague terms, could survive intermediate scrutiny. As EFF wrote:

“This Court should not follow the approach of the district court below. It narrowly focused on California’s interest in blocking minors from harmful content. But the government often has several substantial interests, as here: not just protection of information privacy, but also protection of free expression, information security, equal opportunity, and reduction of deceptive commercial speech. The privacy principles that inform AADC’s consumer data privacy provisions are narrowly tailored to these interests.”

EFF has a long history of supporting well-written privacy laws against First Amendment attacks. The AADC is not one of them. We have filed briefs supporting laws that protect video viewing history, biometric data, and other internet records. We have advocated for a federal law to protect reproductive health records. And we have written extensively on the need for a strong federal privacy law.

First, Let’s Talk About Consumer Privacy: 2023 Year in Review

December 29, 2023 at 14:53

Whatever online harms you want to alleviate on the internet today, you can do it better—with a broader impact—if you enact strong consumer data privacy legislation first. That is a grounding principle that has informed much of EFF’s consumer protection work in 2023.

While consumer privacy will not solve every problem, it is superior to many other proposals that attempt to address issues like child mental health or foreign government surveillance. That is true for two reasons: well-written consumer privacy laws address the root source of corporate surveillance, and they can withstand constitutional scrutiny.

EFF’s work on this issue includes: (1) advocating for strong comprehensive consumer data privacy laws; (2) fighting bad laws; (3) protecting existing sectoral privacy laws.

Advocating for Strong Comprehensive Consumer Data Privacy


This year, EFF released a report titled “Privacy First: A Better Way to Address Online Harms.” The report listed the key pillars of a strong privacy law (like no online behavioral ads and data minimization) and explained how these principles can help address current issues (like protecting children’s mental health or reproductive health privacy).

We highlighted why data privacy legislation is a form of civil rights legislation and why adtech surveillance often feeds government surveillance.

And we made the case that well-written privacy laws can be constitutional when they regulate the commercial processing of personal data, when that data is private and not a matter of public concern, and when the law is tailored to the government’s interests in privacy, free expression, security, and guarding against discrimination.

Fighting Bad Laws Based in Censorship of Internet Users


We filed amicus briefs in lawsuits challenging laws in Arkansas and Texas that required internet users to submit to age verification before accessing certain online content. These challenges continue to make their way through the courts, but they have so far been successful. We plan to do the same in a case challenging California’s Age Appropriate Design Code, while cautioning the court not to cast doubt on important privacy principles.

We filed a similar amicus brief in a lawsuit challenging Montana’s TikTok ban, where a federal court recently ruled that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content.

Protecting Existing Sectoral Laws


EFF is also gearing up to file an amicus brief supporting the constitutionality of the federal law called the Video Privacy Protection Act, which limits how video providers can sell or share their users’ private viewing data with third-party companies or the government. While we think a comprehensive privacy law is best, we support strong existing sectoral laws that protect data like video watch history, biometrics, and broadband use records.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

EFF Continues Fight Against Unconstitutional Geofence and Keyword Search Warrants: 2023 Year in Review

December 22, 2023 at 13:30

EFF continues to fight back against high-tech general warrants that compel companies to search broad swaths of users’ personal data. In 2023, we saw victory and setbacks in a pair of criminal cases that challenged the constitutionality of geofence and keyword searches. 

These types of warrants—mostly directed at Google—cast a dragnet that requires a provider to search its entire reserve of user data to identify either everyone in a particular area (geofence) or everyone who has searched for a particular term (keyword). Police generally have no identified suspect. Instead, the usual basis for the warrant is to try to find a suspect by searching everyone’s data.
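As a rough illustration of why these demands are dragnets, the sketch below (illustrative Python only; the record layout and field names are our assumptions, not any provider’s actual schema) shows that answering either kind of warrant means scanning every user’s data, not just a suspect’s.

```python
from dataclasses import dataclass

@dataclass
class LocationRecord:
    user_id: str
    lat: float
    lon: float
    timestamp: int  # Unix time

def geofence_hits(all_records, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Identify everyone whose device reported a location inside the box during
    the window -- note that the filter runs over *all* users' records."""
    return {
        r.user_id
        for r in all_records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and t_start <= r.timestamp <= t_end
    }

def keyword_hits(all_queries, phrase):
    """Identify everyone who ever searched for the phrase -- again, the scan
    covers the provider's entire reserve of search logs."""
    return {user_id for user_id, query in all_queries if phrase in query.lower()}
```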

EFF has consistently argued these types of warrants lack particularity, are overbroad, and cannot be supported by probable cause. They resemble the unconstitutional “general warrants” of the founding era that allowed exploratory rummaging through people’s belongings.

EFF Helped Argue the First Challenge to a Geofence Warrant at the Appellate Level 

In April, the California Court of Appeal held that a geofence warrant seeking user information on all devices located within several densely populated areas in Los Angeles violated the Fourth Amendment. The court was the first appellate court in the United States to review a geofence warrant. EFF filed an amicus brief and jointly argued the case before the court.

In People v. Meza, the court ruled that the warrant failed to put meaningful restrictions on law enforcement and was overbroad because law enforcement lacked probable cause to identify every person in the large search area. The Los Angeles Sheriff’s Department sought a warrant that would force Google to turn over identifying information for every device with a Google account that was within any of six locations over a five-hour window. The area included large apartment buildings, churches, barber shops, nail salons, medical centers, restaurants, a public library, and a union headquarters.  

Despite ruling that the warrant violated the Fourth Amendment, the court refused to suppress the evidence, finding the officers acted in good faith based on a facially valid warrant. The court also unfortunately found that the warrant did not violate California’s landmark Electronic Communications Privacy Act (CalECPA), which requires state warrants for electronic communication information to particularly describe the targeted individuals or accounts “as appropriate and reasonable.” While CalECPA has its own suppression remedy, the court held it applies only when there is a statutory violation, not when the warrant violates the Fourth Amendment alone. This clearly contradicts an earlier California geofence decision, although that ruling came from a trial court, not the Court of Appeal.

EFF Filed Two Briefs in First Big Ruling on Keyword Search Warrants 

In October, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase. In a weak and ultimately confusing opinion, the court upheld the warrant, finding the police relied on it in good faith. EFF filed two amicus briefs and was heavily involved in the case.

In People v. Seymour, the four-justice majority recognized that people have a constitutionally protected privacy interest in their internet search queries and that these queries implicate a person’s free speech rights. Nonetheless, the majority’s reasoning was cursory and at points mistaken. Although the court found that the Colorado constitution protects users’ privacy interest in search queries associated with an IP address, it held that the Fourth Amendment does not, due to the third-party doctrine—reasoning that federal courts have held there is no expectation of privacy in IP addresses. We believe this ruling overlooked key facts and recent precedent.

EFF Will Continue to Fight to Convince Courts, Legislatures, and Companies  

EFF plans to make a similar argument in a Pennsylvania case in January challenging a keyword warrant served on Google by the state police.  

EFF has consistently argued in court, to lawmakers, and to tech companies themselves that these general warrants do not comport with the constitution. For example, we have urged Google to resist these warrants, be more transparent about their use, and minimize the data that law enforcement can gain access to. Google appears to be taking some of that advice by limiting its own access to users’ location data. The company recently announced a plan to allow users to store their location data directly on their device and automatically encrypt location data in the cloud—so that even Google can’t read it. 
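The design Google describes is, at its core, client-side encryption. The sketch below is our own minimal illustration of that idea (using the third-party `cryptography` library; none of it reflects Google’s actual implementation): the key stays on the device, so the ciphertext held in the cloud is unreadable to the provider, and there is nothing meaningful to turn over in response to a geofence warrant.

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; the provider never sees it.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

# A hypothetical location point, serialized before encryption.
location_point = b'{"lat": 37.7793, "lon": -122.4193, "ts": 1703203200}'

# Only the ciphertext is uploaded. Without the device-held key, the provider
# cannot decrypt it, so it has no usable location data to search or disclose.
ciphertext_for_cloud = cipher.encrypt(location_point)

# Decryption is possible only on a device that holds the key.
assert cipher.decrypt(ciphertext_for_cloud) == location_point
```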

This year, at least one company has proved it is possible to resist geofence warrants by minimizing data collection. Apple’s latest transparency report, for example, notes that the company “does not have any data to provide in response to geofence warrants.”

 

 This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harms are often rooted in censorship and driven by news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy, it would also raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right to action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.

Is Your State’s Child Safety Law Unconstitutional? Try Comprehensive Data Privacy Instead

Comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harm they do to children. Well-written privacy legislation has the added benefit of being constitutional—unlike the flurry of laws that restrict content behind age verification requirements that courts have recently blocked. Such misguided laws do little to protect kids while doing much to invade everyone’s privacy and speech.

Courts have issued preliminary injunctions blocking laws in Arkansas, California, and Texas because they likely violate the First Amendment rights of all internet users. EFF has warned that such laws were bad policy and would not withstand court challenges. Nonetheless, different iterations of these child safety proposals continue to be pushed at the state and federal level.

The answer is to re-focus attention on comprehensive data privacy legislation, which would address the massive collection and processing of personal data that is the root cause of many problems online. Just as important, it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

It Is Comparatively Easy to Write Data Privacy Laws That Are Constitutional

EFF has long pushed for strong comprehensive commercial data privacy legislation and continues to do so. Data privacy legislation has many components. But at its core, it should minimize the amount of personal data that companies process, give users certain rights to control their personal data, and allow consumers to sue when the law is violated.

EFF has argued that privacy laws pass First Amendment muster when they have a few features that ensure the law reasonably fits its purpose. First, they regulate the commercial processing of personal data. Second, they do not impermissibly restrict the truthful publication of matters of public concern. And finally, the government’s interest and the law’s purpose are to protect data privacy, to expand the free expression that privacy enables, and to protect the security of data against insider threats, hacks, and eventual government surveillance. When these conditions are met, a privacy law will be constitutional so long as the government shows a close fit between the law’s goals and its means.

EFF made this argument in support of the Illinois Biometric Information Privacy Act (BIPA), and a law in Maine that limits the use and disclosure of personal data collected by internet service providers. BIPA, in particular, has proved wildly important to biometric privacy. For example, it led to a settlement that prohibits the company Clearview AI from selling its biometric surveillance services to law enforcement in the state. Another settlement required Facebook to pay hundreds of millions of dollars for its policy (since repealed) of extracting faceprints from users without their consent.

Courts have agreed. Privacy laws that have been upheld under the First Amendment, or cited favorably by courts, include those that regulate biometric data, health data, credit reports, broadband usage data, phone call records, and purely private conversations.

The Supreme Court, for example, has cited the federal 1996 Health Insurance Portability and Accountability Act (HIPAA) as an example of a “coherent” privacy law, even when it struck down a state law that targeted particular speakers and viewpoints. Additionally, when evaluating the federal Wiretap Act, the Supreme Court correctly held that the law cannot be used to prevent a person from publishing legally obtained communications on matters of public concern. But it otherwise left in place the wiretap restrictions that date back to 1934, designed to protect the confidentiality of private conversations.

It Is Nearly Impossible to Write Age Verification Requirements That Are Constitutional. Just Ask Arkansas, California, and Texas

Federal courts have recently granted preliminary injunctions that block laws in Arkansas, California, and Texas from going into effect because they likely violate the First Amendment rights of all internet users. While the laws differ from each other, they all require (or strongly incentivize) age verification for all internet users.

The Arkansas law, which EFF strongly opposes, requires age verification for users of certain social media services and bans minors from those services without parental consent. The court blocked it, reasoning that the age verification requirement would deter everyone from accessing constitutionally protected speech and would burden anonymous speech. EFF and the ACLU filed an amicus brief against the Arkansas law.

In California, a federal court recently blocked the state’s Age-Appropriate Design Code (AADC) under the First Amendment. Significantly, the AADC strongly incentivized websites to require users to verify their age. The court correctly found that age estimation is likely to “exacerbate” the problem of child security because it requires everyone “to divulge additional personal information” to verify their age. The court blocked the entire law, even some privacy provisions we’d like to see in a comprehensive privacy law if they were not intertwined with content limitations and age-gating. EFF does not agree with the court’s reasoning in its entirety because it undervalued the state’s legitimate interest in and means of protecting people’s privacy online. Nonetheless, EFF originally asked the California governor to veto this law, believing that true data privacy legislation has nothing to do with access restrictions.

The Texas law requires age verification for users of websites that post sexual material, and exclusion of minors. The law also requires warnings about sexual content that the court found unsupported by evidence. The court held both provisions are likely unconstitutional. It explained that the age verification requirement, in particular, is “constitutionally problematic because it deters adults’ access to legal sexually explicit material, far beyond the interest of protecting minors.” EFF, ACLU, and other groups filed an amicus brief against this Texas law.

Support Comprehensive Privacy Legislation That Will Stand the Test of Time

Courts will rightly continue to strike down similar age verification and content blocking laws, just as they did 20 years ago. Lawmakers can and should avoid this pre-determined fight and focus on passing laws that will have a lasting impact: strong, well-written comprehensive data privacy.
