
The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year

Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy-first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data.

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.
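
To make that concrete, here is a hedged sketch of the kind of request a third-party analytics snippet fires from a page. Nothing below is Kaiser’s or any vendor’s actual code; the endpoint and parameter names are hypothetical stand-ins. The point is that the full page URL, search terms included, travels to the tracker alongside a persistent identifier:

```python
# Illustrative sketch only: the kind of beacon a third-party analytics
# script sends when embedded in a health portal page. The endpoint and
# parameter names are hypothetical, not any vendor's real API.
from urllib.parse import urlencode

def build_tracking_beacon(page_url: str, cookie_id: str) -> str:
    # The full page URL, including a search query like "?q=depression",
    # rides along with a persistent per-browser cookie ID. That pairing
    # is how browsing details reach the tracker's servers.
    params = {
        "tid": "TRACKER-ACCOUNT-ID",  # site's tracker account (placeholder)
        "cid": cookie_id,             # persistent per-browser identifier
        "dl": page_url,               # document location, search terms and all
    }
    return "https://tracker.example/collect?" + urlencode(params)

print(build_tracking_beacon(
    "https://healthsite.example/encyclopedia/search?q=depression",
    "555.12345",
))
```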

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.

The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using infostealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner), though that hasn’t been confirmed, as Hot Topic has still not notified customers nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 

The Only Stalkers Allowed Award: mSpy

mSpy, a commercially available mobile stalkerware app owned by the Ukraine-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.

The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the stolen data being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach it is doing some basic things, like resetting user passwords and strengthening its security infrastructure.

The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 

The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) in which 576,000 accounts were compromised via a “credential stuffing attack.” This is a common, relatively easy sort of automated attack where thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were in the Comcast data breach in 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.
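
Here is a hedged sketch of why credential stuffing is so cheap to automate. Everything below is hypothetical: made-up credentials, a stand-in login function, and no real services:

```python
# Illustrative sketch of credential-stuffing logic. Everything here is
# hypothetical: no real services, endpoints, or credentials.
leaked_combos = [
    ("alice@example.com", "hunter2"),    # from an old, unrelated breach
    ("bob@example.com", "tr0ub4dor"),
]

def try_login(service: str, email: str, password: str) -> bool:
    # Stand-in for an automated login attempt against `service`.
    # Real attacks script this against the target's login endpoint.
    fake_user_db = {"streaming.example": {"alice@example.com": "hunter2"}}
    return fake_user_db.get(service, {}).get(email) == password

# The attacker simply replays every leaked pair against a new target.
for email, password in leaked_combos:
    if try_login("streaming.example", email, password):
        print(f"reused password -> account takeover: {email}")
```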

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.

The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.

The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!
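
To see why exposing the OTP secret is game over, here is a brief sketch using the pyotp library (the secret below is a made-up example, not anyone’s real one): whoever holds the secret can mint exactly the codes the victim’s authenticator app shows.

```python
# pip install pyotp
import pyotp

# The base32 secret of the kind Spoutible's API was reportedly returning
# per user. With it, an attacker's TOTP generator is indistinguishable
# from the victim's authenticator app. (Example secret, not a real one.)
leaked_secret = "JBSWY3DPEHPK3PXP"

totp = pyotp.TOTP(leaked_secret)
print("current valid 2FA code:", totp.now())  # same code the victim sees
```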

EFF thanks guest author Troy Hunt for this contribution to the Breachies.

The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.

The Biggest Health Breach We’ve Ever Seen Award: Change Healthcare

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.

The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a Senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 

The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records for individuals using these companies. A combination of infostealer malware infections on non-Snowflake machines and weak security protecting the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, from April to July of this year, Snowflake was not requiring two-factor authentication, an account security measure that could have protected against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to its website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, it stores and indexes troves of customer data for companies to analyze. And the more data stored, the bigger the target for malicious actors seeking leverage to extort those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, it included billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place, or shortening how long it is retained. Otherwise it just sits there waiting for the next breach.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others. (For a picture of what a password manager does under the hood, see the sketch after this list.)
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too; that might sound absurd considering they can’t even open bank accounts, but children are targets for identity theft as well.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 
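
Password managers handle both halves of that first tip: generation and storage. As a minimal sketch of the generation half, using only Python’s standard library (the site names are placeholders), this is roughly what happens each time a manager offers you a new password:

```python
# A minimal sketch of what a password manager does for you: generate a
# long, random, unique password per site using the standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account: a breach of any single site
# exposes nothing reusable anywhere else.
for site in ("roku.example", "bank.example", "forum.example"):
    print(site, "->", generate_password())
```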

(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Should I Use My State’s Digital Driver’s License?

October 11, 2024, 11:56 AM

A mobile driver’s license (often called an mDL) is a version of your ID that you keep on your phone instead of in your pocket. In theory, it would work wherever your regular ID works—TSA, liquor stores, to pick up a prescription, or to get into a bar. This sounds simple enough, and might even be appealing—especially if you’ve ever forgotten or lost your wallet. But there are a few questions you should ask yourself before tossing your wallet into the sea and wandering the earth with just your phone in hand.

In the United States, some proponents of digital IDs promise a future where you can present your phone to a clerk or bouncer and only reveal the information they need—your age—without revealing anything else. They imagine everyone whipping through TSA checkpoints with ease and enjoying simplified applications for government benefits. They also see it as a way to verify identity on the internet, a system that would likely lead to censorship for everyone.

There are real privacy and security trade-offs with digital IDs, and it’s not clear if the benefits are big enough—or exist at all—to justify them.

But if you are curious about this technology, there are still a few things you should know and some questions to consider.

Questions to Ask Yourself

Can I even use a Digital ID anywhere? 

The idea of being able to verify your age by just tapping your phone against an electronic reader—like you may already do to pay for items—may sound appealing. It might make checking out a little faster. Maybe you won’t have to worry about the bouncer at your favorite bar creepily wishing you “happy birthday,” or noting that they live in the same building as you.

Most of these use cases aren’t available yet in the United States. While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used at TSA checkpoints.

For example, in California, only a small handful of convenience stores in Sacramento and Los Angeles currently accept digital IDs for purchasing age-restricted items like alcohol and tobacco. TSA lists airports that support mobile driver’s licenses, but it only works for TSA PreCheck and only for licenses issued in eleven states.

Also, “selective disclosure,” like revealing just your age and nothing else, isn’t always fully baked. When we looked at California’s mobile ID app, this feature wasn’t available in the mobile ID itself, but rather as part of the TruAge add-on. Even if the promise of this technology is appealing to you, you might not really be able to use it.

Is there a law in my state about controlling how police officers handle digital IDs?

One of our biggest concerns with digital IDs is that people will unlock their phones and hand them over to police officers in order to show an ID. Ordinarily, police need a warrant to search the content of our phones, because they contain what the Supreme Court has properly called “the privacies of life.”

There are some potential technological protections. You can technically get your digital ID read or scanned in the Wallet app on your phone without unlocking the device completely. Police could also carry special readers, like those at some retail stores.

But it’s all too easy to imagine a situation where police coerce or trick someone into unlocking their phone completely, or where a person does not even know that they just need to tap their phone instead of unlocking it. Even seasoned Wallet users screw up payment now and again, and doing so under pressure amplifies that risk. Handing your phone over to law enforcement, either to show a QR code or to hold it up to a reader, is also risky since a notification may pop up that the officer could interpret as probable cause for a search.

Currently, there are few guardrails for how law enforcement interacts with mobile IDs. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement, but as far as we know it’s the only state to do anything so far.

At the very minimum, law enforcement should be prohibited from leveraging an mDL check to conduct a phone search.

Is it clear what sorts of tracking the state would use this for?

Smartphones have already made it significantly easier for governments and corporations to track everything we do and everywhere we go. Digital IDs are poised to add to that data collection, by increasing the frequency that our phones leave digital breadcrumbs behind us. There are technological safeguards that could reduce these risks, but they’re currently not required by law, and no technology fix is perfect enough to guarantee privacy.

For example, if you use a digital ID to prove your age to buy a six-pack of beer, the card reader’s verifier might make a record of the holder’s age status. Even if personal information isn’t exchanged in the credential itself, you may have provided payment info associated with this transaction. This collection of personal information might then be sold to data brokers, seized by police or immigration officials, stolen by data thieves, or misused by employees.
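
As a purely hypothetical sketch (these fields come from no real mDL deployment), consider how identifying the metadata around a single “age only” check can be:

```python
# Purely hypothetical sketch of what a verifier might log from a single
# "selective disclosure" age check. None of these fields come from a real
# mDL deployment; the point is how identifying the metadata alone can be.
age_check_record = {
    "age_over_21": True,                    # the only attribute disclosed
    "timestamp": "2024-10-11T21:14:07Z",
    "reader_id": "store-0042-register-3",   # ties the check to one location
    "payment_token": "card-4821",           # if paired with the purchase
}
# Joined with payment data or sold to a broker, "anonymous" age checks
# become a timestamped trail of where a specific person has been.
print(age_check_record)
```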

This is just one more reason why we need a federal data privacy law: currently, there aren’t sufficient rules around how your data gets used.

Do I travel between states often?

Not every state offers or accepts digital IDs, so if you travel often, you’ll have to carry a paper ID. If you’re hoping to just leave the house, hop on a plane, and rent a car in another state without needing a wallet, that’s likely still years away.

How do I feel about what this might be used for online?

Mobile driver’s licenses are a clear fit for online age verification schemes. The privacy harms of these sorts of mandates vastly outweigh any potential benefit. Just downloading and using a mobile driver’s license certainly doesn’t mean you agree with that plan, but it’s still good to be mindful of what the future might entail.

Am I being asked to download a special app, or use my phone’s built-in Wallet?

Both Google and Apple allow a few states to use their Wallet apps directly, while other states use a separate app. For Google and Apple’s implementations, we tend to have better documentation and a clearer understanding of how data is processed. For apps, we often know less.

In some cases, states will offer Apple and Google Wallet support, while also providing their own app. Sometimes, this leads to different experiences around where a digital ID is accepted. For example, in Colorado, the Apple and Google Wallet versions will get you through TSA. The Colorado ID app cannot be used at TSA, but can be used at some traffic stops, and to access some services. Conversely, California’s mobile ID comes in an app, but also supports Apple and Google Wallets. Both California’s app and the Apple and Google Wallets are accepted at TSA.

Apps can also come and go. For example, Florida removed its app from the Apple App Store and Google Play Store completely. All these implementations can make for a confusing experience, where you don’t know which app to use, or what features—if any—you might get.

The Right to Paper

For now, the success or failure of digital IDs will at least partially be based on whether people show interest in using them. States will likely continue to implement them, and while it might feel inevitable, it doesn’t have to be. There are countless reasons why a paper ID should continue to be accepted. Not everyone has the resources to own a smartphone, and not everyone who has a smartphone wants to put their ID on it. As states move forward with digital ID plans, privacy and security are paramount, and so is the right to a paper ID.

Note: The Real ID Modernization Act provides one protection for using an mDL that we initially missed in this blog post: if you present your phone to federal law enforcement, it cannot be construed as consent to seize or search the device.

Strong End-to-End Encryption Comes to Discord Calls

We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers a way to verify that the strong encryption a call is using is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it over to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This is a way to ensure no one is impersonating a participant or listening in on the conversation.
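
The actual derivation is specified in Discord’s DAVE whitepaper; as a hedged illustration of the general technique behind such codes, here is how a short, human-comparable digest can be computed from shared key material so that everyone on a call can compare the same value out-of-band:

```python
# Hedged illustration of deriving a short, human-comparable verification
# code from shared key material. This mirrors the general technique (a
# digest both sides can compare out-of-band); it is NOT Discord's actual
# DAVE derivation, which is specified in their whitepaper.
import hashlib

def voice_privacy_code(session_key: bytes, group_size: int = 5) -> str:
    digest = hashlib.sha256(b"verification-code" + session_key).hexdigest()
    # Render as digit groups that are easy to read aloud and compare.
    digits = str(int(digest, 16))[:group_size * 4]
    return " ".join(digits[i:i + group_size]
                    for i in range(0, len(digits), group_size))

key = bytes.fromhex("9f1c88a2" * 8)  # stand-in for the negotiated call key
print(voice_privacy_code(key))       # every participant should see the same code
```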

By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you on each device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll want to verify for each).

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, as well as reliably discovered in a secure way by conversation partners, is no trivial task. Other apps such as Signal require some manual user interaction to ensure the sharing of key-material across multiple devices is done in a secure way. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.

While this is an understandable trade-off, we hope Discord takes an extra step to allow users who have heightened security concerns the ability to share their persistent keys across devices. For the sake of usability, they could by default generate separate keys for each device while making sharing keys across them an extra step. This will avoid the associated risk of your conversation partners seeing you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has performed the protocol design and implementation of DAVE in a solidly transparent way: publishing the protocol whitepaper and the open-source library, commissioning an audit from well-regarded outside researchers, and expanding their bug-bounty program to reward any security researcher who reports a vulnerability in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud this approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

End-to-end encrypted video and audio chats are a good step forward—one that too many messaging apps lack. But because protection of our text conversations is important and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.

2 Fast 2 Legal: How EFF Helped a Security Researcher During DEF CON 32

This year, like every year, EFF sent a variety of lawyers, technologists, and activists to the summer security conferences in Las Vegas to help foster support for the security research community. While we were at DEF CON 32, security researcher Dennis Giese received a cease-and-desist letter on a Thursday afternoon for his talk scheduled just hours later for Friday morning. EFF lawyers met with Dennis almost immediately, and by Sunday, Dennis was able to give his talk. Here’s what happened, and why the fight for coders’ rights matters.

Throughout the year, we receive a number of inquiries from security researchers who seek to report vulnerabilities or present on technical exploits and want to understand the legal risks involved. Enter the EFF Coders’ Rights Project, designed to help programmers, tinkerers, and innovators who wish to responsibly explore technologies and report on those findings. Our Coders’ Rights lawyers counsel many of those who reach out to us on anything from mitigating legal risk in their talks, to reporting vulnerabilities they’ve found, to responding to legal threats. The number of inquiries often ramps up in the months leading to “hacker summer camp,” but we usually have at least a couple of weeks to help and advise the researcher.

In this case, however, we did our work on an extremely short schedule.

Dennis is a prolific researcher who has presented his work at conferences around the world. At DEF CON, one of the talks he planned along with a co-presenter involved digital locks, including the vendor Digilock. In the months leading up to the presentation, Dennis shared his findings with Digilock and sought to discuss potential remediations. Digilock expressed interest in these conversations, so it came as a surprise when the company sent him the cease-and-desist letter on the eve of the presentation raising a number of baseless legal claims.

Because we had lawyers on the ground at DEF CON, Dennis was able to connect with EFF soon after receiving the cease-and-desist, and, together with Kurt Opsahl, former EFF attorney and current Special Counsel to EFF, we agreed to represent him in responding to Digilock. Over the course of forty-eight hours, we were able to meet with Digilock’s lawyers and ultimately facilitated a productive conversation between Dennis and its CEO.

To its credit, Digilock agreed to rescind the cease-and-desist letter and also provided Dennis with useful information about its plans to address vulnerabilities discussed in his research.

Dennis was able to give the talk, with this additional information, on Sunday, the last day of DEF CON.

We are proud we could help Dennis navigate what can be a scary situation of receiving last-minute legal threats, and are happy that he was ultimately able to give his talk. Good-faith security researchers like Dennis increase security for all of us who use digital devices. By identifying and disclosing vulnerabilities, hackers are able to improve security for every user who depends on information systems for their daily life and work. If we do not know about security vulnerabilities, we cannot fix them, and we cannot make better computer systems in the future. Dennis’s research was not only legal, it demonstrated real world problems that the companies involved need to address.

Just as important as discovering security vulnerabilities is reporting the findings so that users can protect themselves, vendors can avoid introducing vulnerabilities in the future, and other security researchers can build off that information. By publicly explaining these sorts of attacks and proposing remedies, other companies that make similar devices can also benefit by fixing these vulnerabilities. In discovering and reporting on their findings, security researchers like Dennis help build a safer future for all of us.

However, this incident reminds us that even good-faith hackers often face legal challenges meant to silence them from publicly sharing the legitimate fruits of their labor. The Coders’ Rights Project is part of our longstanding work to protect researchers through legal defense, education, amicus briefs, and involvement in the community. Through it, we hope to promote innovation and safeguard the rights of curious tinkerers and hackers everywhere.

We must continue to fight for the right to share this research, which leads to better security for us all. If you are a security researcher in need of legal assistance or have concerns before giving a talk, do not hesitate to reach out to us. If you'd like to support more of this work, please consider donating to EFF.

Senators Expose Car Companies’ Terrible Data Privacy Practices

July 29, 2024, 5:55 PM

In a letter to the Federal Trade Commission (FTC) last week, Senators Ron Wyden and Edward Markey urged the FTC to investigate several car companies caught selling and sharing customer information without clear consent. Alongside details previously gathered from reporting by The New York Times, the letter also showcases exactly how much this data is worth to the car companies selling this information.

Car companies collect a lot of data about driving behavior, ranging from how often you brake to how rapidly you accelerate. This data can then be sold off to a data broker or directly to an insurance company, where it’s used to calculate a driver’s riskiness and adjust insurance rates accordingly. Promoters often defend this surveillance as a way to get discounts on insurance, but that rarely addresses the fact that your insurance rates may actually go up.

If your car is connected to the internet or has an app, you may have “agreed” to this type of data sharing when setting it up, without realizing it. The Senators’ letter asserts that Hyundai shares drivers’ data without seeking their informed consent, and that GM and Honda used deceptive practices during signup.

When it comes to the price that companies can get for selling your driving data, the numbers range wildly, but the data isn’t as valuable as you might imagine. The letter states that Honda sold the data on about 97,000 cars to an analytics company, Verisk—which turned around and sold the data to insurance companies—for $25,920, or 26 cents per car. Hyundai got a better deal, but still not astronomical numbers: Verisk paid Hyundai $1,043,315.69, or 61 cents per car. GM declined to share details about its sales.

The letter also reveals that while GM stopped sharing driving data after The New York Times’ investigation, it did not stop sharing location data, which it’s been sharing for years. GM collects and shares location data on every car that’s connected to the internet, and doesn’t offer a way to opt out beyond disabling internet-connectivity altogether. According to the letter, GM refused to name the company it’s sharing the location data with currently. While GM claims the location data is de-identified, there is no way to de-identify location data. With just one data point, where the car is parked most often, it becomes obvious where a person lives.
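
Here is a hedged sketch of that re-identification logic, with made-up coordinates: given a handful of timestamped pings from one “anonymous” car, the most common overnight parking spot falls out immediately.

```python
# Sketch of re-identifying "de-identified" car location data. The
# coordinates are made up; the point is that the most common overnight
# parking spot gives away a home address with no name attached.
from collections import Counter

# (rounded lat, rounded lon, hour of day) pings from one "anonymous" car
pings = [
    (37.7793, -122.4193, 2),   # overnight
    (37.7793, -122.4193, 3),   # overnight
    (37.7793, -122.4193, 23),  # overnight
    (37.7901, -122.4010, 10),  # daytime (office?)
    (37.7901, -122.4010, 14),
]

overnight = [(lat, lon) for lat, lon, hour in pings if hour >= 22 or hour <= 5]
home, _count = Counter(overnight).most_common(1)[0]
print("likely home location:", home)  # a street-address lookup away
```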

Car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom. This level of tracking is a nightmare on its own, and is made worse for certain kinds of vulnerable populations, such as survivors of domestic abuse.

The three automakers listed in the letter are certainly not the only ones sharing data without real consent, and it’s likely there are other data brokers who handle this type of data. The FTC should investigate this industry further, just as it has recently investigated many other industries that threaten data privacy. Moreover, Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent.

Celebrate Repair Independence Day!

Right-to-repair advocates have spent more than a decade working for a simple goal: to make sure you can fix and tinker with your own stuff. That should be true whether we’re talking about a car, a tractor, a smartphone, a computer, or really anything you buy. Yet product manufacturers have used the growing presence of software on devices to make nonsense arguments about why tinkering with your stuff violates their copyright.

Our years of hard work pushing for consumer rights to repair are paying off in a big way. Case in point: Today—July 1, 2024—two strong repair bills are now law in California and Minnesota. As Repair Association Executive Director Gay Gordon-Byrne said on EFF's podcast about right to repair, after doggedly chasing this goal for years, we caught the car!

Sometimes it's hard to know what to do after a long fight. But it's clear for the repair movement. Now is the time to celebrate! That's why EFF is joining our friends in the right to repair world by celebrating Repair Independence Day.

There are a few ways to do this. You could grab your tools and fix that wonky key on your keyboard. You could take a cracked device to a local repair shop. Or you can read up on what your rights are. If you live in California or Minnesota—or in Colorado or New York, where right to repair laws are already in effect—and want to know what the repair laws in your state mean for you, check out this tip sheet from Repair.org.

And what if you're not in one of those states? We still have good news for you. We're all seeing the fruits of this labor of love, even in states that don't have specific laws. Companies have heard, time and again, that people want to be able to fix their own stuff. As the movement has gained momentum, device manufacturers have started to offer more repair-friendly programs: Kobo offering parts and guides, Microsoft selling parts for controllers, Google committing to offering spare parts for Pixels for seven years, and Apple offering some self-service repairs.

It's encouraging to see companies respond to our demands for the right to repair, though laws such as those going into effect today make sure they can't roll back their promises. And, of course, the work is not done. Repair advocates have won incredible victories in California and Minnesota (with another good law in Oregon coming online next July). But there are still lots of things you should be able to fix without interference that are not covered by these bills, such as tractors.

We can't let up, especially now that we're winning. But today, it's time to enjoy our hard-won victories. Happy Repair Independence Day!

How to Clean Up Your Bluesky Feed

In our recent comparison of Mastodon, Bluesky, and Threads, we detail a few of the ways the similar-at-a-glance microblogging social networks differ, and one of the main distinctions is how much control you have over what you see as a user. We’ve detailed how to get your Mastodon feed into shape before, and now it’s time to clean up your Bluesky feed. We’ll do this mostly through its moderation tools.

Currently, Bluesky is mostly a single experience that operates on one set of flagship services operated by the Bluesky corporation. As the AT Protocol expands and decentralizes, so will the variety of moderation and custom algorithmic feed options. But for the time being, we have Bluesky.

Bluesky’s current moderation filters operate on two levels: the default options built into the Bluesky app, and community-created filters called “labelers.” The company’s default system includes options and company labelers that hide the sorts of things we’re all used to having restricted on social networks, like spam or adult content. It also hides other categories by default, like engagement farming and certain extremist views. Community options use Bluesky’s own moderation tool, Ozone, and are built on exactly the same system as the company’s defaults; the only difference is which ones are built into the app. All this choice ends up being both powerful and overwhelming. So let’s walk through how to use it to make your Bluesky experience as good as possible.

Familiarize Yourself with Bluesky’s Moderation Tools

Bluesky offers several ways to control what appears in your feed: labeling and curation tools to hide (or warn about) the content of a post, and tools to block accounts from your feed entirely. Let’s start with customizing the content you see.

Get to Know Bluesky’s Built-In Settings

By default, Bluesky offers a basic moderation tool that allows you to show, hide, or warn about a range of content related to everything from topics like self-harm, extremist views, or intolerance, to more traditional content moderation like security concerns, scams, or inauthentic accounts.

This build-your-own filter approach is different from other social networks, which tend to control moderation on a platform level, leaving little up to the end user. This gives you control over what you see in your feed, but it’s also overwhelming to wrap your head around. We suggest popping into the moderation screen to see how it’s set up, and tweak any options you’d like:

Tap > Settings > Moderation > Bluesky Moderation Service to get to the settings. You can choose from three display options for each type of post: off (you’ll see it), warn (you’ll get a warning before you can view the post), or hide (you won’t see the post at all).

There’s no way currently to entirely opt out of Bluesky’s defaults, though the company does note that any separate client app (i.e., not the official Bluesky app) can set up its own rules. However, you can subscribe to custom label sets to layer on top of the Bluesky defaults. These labels are similar to the Block Together tool formerly supported by Twitter, and allow individual users or communities to create their own moderation filters. As with the default moderation options, you can choose to have anything that gets labeled hidden or see a warning if it’s flagged. These custom services can include all sorts of highly specific labels, like whether an image is suspected to be made with AI, includes content that may trigger phobias (like spiders), and more. There’s currently no way to easily search for these labeling services, but Bluesky notes a few here, and there’s a broad list here.

To enable one of these, search for the account name of a labeler, like “@xblock.aendra.dev” and then subscribe to it. Once you subscribe, you can toggle any labeling filters the account offers. If you decide you no longer want to use the service or you want to change the settings, you can do so on the same moderation page noted above.

Build Your Own Mute and Block Lists (or Subscribe to Others)

Custom moderation and labels don’t replace one of the most common tools in all of social media: the ability to block accounts entirely. Here, Bluesky offers something new with the old, though. Not only can you block and mute users, you can also subscribe to block lists published by other users, similar to tools like Block Party.

To mute or block someone, tap their user profile picture to get to their profile, then the three-dot icon, then choose to “Mute Account,” which makes it so they don’t appear in your feed, but they can still see yours, or “Block Account,” which makes it so they don’t appear in your feed and they can’t view yours. Note that a list of your Muted accounts is private, but your Blocked accounts are public. Anyone can see who you’ve blocked, but not who you’ve muted.
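
Under the hood, these actions are ordinary XRPC calls, so they can also be scripted. Here is a minimal sketch against Bluesky’s public HTTP API, assuming the documented com.atproto.server.createSession and app.bsky.graph.muteActor endpoints; the handle and app password below are placeholders:

```python
# Minimal sketch: muting an account over Bluesky's public XRPC API.
# Assumes the documented com.atproto.server.createSession and
# app.bsky.graph.muteActor endpoints; handle and password are placeholders.
import requests

PDS = "https://bsky.social"

# Log in with an app password (not your main one) to get a session token.
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": "you.bsky.social", "password": "app-password-here"},
).json()

# Mute an account: they vanish from your feed, but can still see yours.
resp = requests.post(
    f"{PDS}/xrpc/app.bsky.graph.muteActor",
    headers={"Authorization": f"Bearer {session['accessJwt']}"},
    json={"actor": "spammy.example.com"},  # handle or DID to mute
)
resp.raise_for_status()
```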

You can also use built-in algorithmic tools like muting specific words or phrases. Tap > Settings > Moderation and then tap “Mute words & tags.” Type in any word or phrase you want to mute, select whether to mute it if it appears in “text & tags” or in “tags only,” and then it’ll be hidden from your feed.

Users can also experiment with more elaborate algorithmic curation options, such as using tools like Blacksky to completely reshape your feed.

If all this manual work makes you tired, then mute lists might be the answer. These are curated lists made by other Bluesky users that mass mute accounts. These mute lists, unlike muted accounts, are public, though, so keep that in mind before you create or sign up for one.

As with community-run moderation services, there’s not currently a great way to search for these lists. To sign up for a mute list you’ll need to know the username of someone who has created a block or mute list that you want to use. Search for their profile, tap the “Lists” option from their profile page, tap the list you’re interested in, then “Subscribe.” Confusingly, from this screen, a “List” can be a feed you subscribe to of posts you want to see (like a list of “people who work at EFF”) or a block or mute list. If it’s referred to as a “user list” and has the option to “Pin to home,” then it’s a feed you can follow; otherwise it’s a mute or block list.

Clean Up Your Timeline

Is there some strange design decision in the app that makes you question why you use it? Perhaps you hate seeing reposts? Bluesky offers a few ways to choose how information is displayed in the app that can make it easier to use. These are essentially custom algorithms, which Bluesky calls “Feeds,” that filter and focus your content however you want.

Subscribe to (or Build Your Own) Custom Feeds

Unlike most social networks, Bluesky gives you control over the algorithm that displays content. By default, you’ll get a chronological feed, but you can pick and choose from other options using custom feeds. These let you tinker with your feed, create entirely new ones, and more. Custom feeds make it so you can look at a feed of very specific types of posts, like only mutuals (people who also follow you back), quiet posters (people who don’t post much), news organizations, or just photos of cats. Here, unlike with some of the other custom tools, Bluesky does at least provide a way to search for feeds to use.

Tap > Settings > Feeds. You’ll find a list of your current feeds here, and if you scroll down you’ll find a search bar to look for new ones. These can range from as broad as “Posters in Japan” to as focused as “Posts about Taylor Swift.” Once you pick a few, these custom feeds will appear at the top of your main timeline. If you ever want to rearrange what order these appear in, head back to the Feeds page, then tap the gear icon in the top-right to get to a screen where you can change the order. If you’re still struggling to find useful feeds, this search engine might help.

Customize How Replies Work, and Other Little Things in Your Feed

Bluesky has one last trick to making it a little nicer to use than other social networks, and that’s the amount of control you get over your main “following” feed. From your feed, tap the controls icon in the top right to get to the “Following Feed Preferences” page.

Here, you can do everything from hiding replies to controlling which replies you do see (like only seeing replies to posts from people you follow, or only for posts with more than two replies). You can also hide reposts and quote posts, and even allow posts from some of your custom feeds to get injected into your main feed. For example, if you enable the “Show Posts from My Feeds” option and you have subscribed to “Quiet Posters,” you’ll occasionally get a post from someone you follow outside of strictly chronological order.

One final bonus tip: enable two-factor authentication. Bluesky rolled out email-based two-factor authentication well after many people signed up, so if you’ve never revisited your settings, you probably never noticed it was offered. We suggest you turn it on to better secure your account. Head to the menu > Settings, then scroll down to “Require email code to log into your account,” and enable it.

Phew, if that all felt a little overwhelming, that’s because it is. Sure, many people can sign up for Bluesky and never touch any of this stuff, but for those who want a safe, customizable experience, the whole thing feels a bit too crunchy in its current state. And while this sort of user empowerment, with so many levers to control the content, is great, it’s also a lot. The good news is that Bluesky’s defaults are currently good enough to get started. But one of the benefits of community-based moderation like we see on Mastodon or certain subreddits is that volunteers do a lot of this heavy lifting for everyone. AT Protocol is still new, however, and perhaps as more developers shape its future through new tools and services, these difficulties will be eased.

Surveillance Defense for Campus Protests

The recent wave of protests calling for peace in Palestine has been met with unwarranted and aggressive suppression from law enforcement, universities, and other bad actors. It’s clear that the changing role of surveillance on college campuses exacerbates the dangers faced by all of the communities colleges are meant to support, and only serves to suppress lawful speech. These harmful practices must come to an end, and until they do, activists should take precautions to protect themselves and their communities. There are no easy or universal answers, but here we outline some common considerations to help guide campus activists.

How We Got Here

Over the past decade, many campuses have been building up their surveillance arsenal and inviting a greater police presence on campus. EFF and fellow privacy and speech advocates have been clear that this is a dangerous trend that chills free expression and makes students feel less safe, while fostering an adversarial and distrustful relationship with the administration.

Many tools used on campuses overlap with the street-level surveillance used by law enforcement, but universities are in a unique position of power over students being monitored. For students, universities are not just their school, but often their home, employer, healthcare provider, visa sponsor, place of worship, and much more. This reliance heightens the risks imposed by surveillance, and brings it into potentially every aspect of students’ lives.

EFF has also been clear for years: as campuses build up their surveillance capabilities in the name of safety, they chill speech and foster a more adversarial relationship between students and the administration. Yet, this expansion has continued in recent years, especially after the COVID-19 lockdowns.

This came to a head in April, when groups across the U.S. pressured their universities to disclose and divest their financial interest in companies doing business in Israel and in weapons manufacturers, and to distance themselves from ties to the defense industry. These protests echo similar campus divestment campaigns against the prison industry in 2015, and the campaign against apartheid South Africa in the 1980s. However, the current divestment movement has been met with disproportionate suppression and unprecedented digital surveillance from many universities.

This guide is written with protest participants in mind. Student journalists covering protests may also face digital threats and can refer to our previous guide for journalists covering protests.

Campus Security Planning

Putting together a security plan is an essential first step to protect yourself from surveillance. You can’t protect all information from everyone, and as a practical matter you probably wouldn’t want to. Instead, you want to identify what information is sensitive and who should and shouldn’t have access to it.

That means this plan will be very specific to your context and your own tolerance of risk from physical and psychological harm. For a more general walkthrough you can check out our Security Plan article on Surveillance Self-Defense. Here, we will walk through this process with prevalent concerns from current campus protests.

What do I want to protect?

Current university protests are a rapid and decentralized response to claims of genocide in Gaza, and to the reported humanitarian crisis in occupied East Jerusalem and the West Bank. Such movements will need to focus on secure communication, immediate safety at protests, and protection from collected data being used for retaliation—either at protests themselves or on social media.

At a protest, a mix of visible and invisible surveillance may be used to identify protesters. This can be as simple as administrators or law enforcement attending and taking notes on what is said, but digital recordings make that same monitoring far less visible. This doesn’t just include video and audio recordings—protesters may also be subject to tracking methods like face recognition technology and location tracking via their phone, school ID usage, or other sensors. So you want to be mindful of anything you say or carry that could reveal your identity or role in the protest, or those of fellow protesters.

This may also be paired with online surveillance. The university or police may monitor activity on social media, even joining private or closed groups to gather information. Of course, any services hosted by the university, such as email or Wi-Fi networks, can also be monitored for activity. Again, being careful about what information is shared with whom is essential, including clearly separating public information (like the time of a rally) from private information (like your location when attending). Also keep in mind that what you say publicly, even in a moment of frustration, may be used to draw negative attention to yourself and undermine the cause.

However, many people may strategically use their position and identity publicly to lend credibility to a movement, such as a prominent author or alumnus. In doing so they should be mindful of those around them in more vulnerable positions.

Who do I want to protect it from?

Divestment challenges the financial underpinning of many institutions in higher education. The most immediate adversaries are clear: the university being pressured and the institutions being targeted for divestment.

However, many schools are escalating by inviting police on campus, sometimes as support for their existing campus police, making them yet another potential adversary. Pro-Palestine protests have drawn attention from some federal agencies, meaning law enforcement will inevitably be a potential surveillance adversary even when not invited by universities.

With any sensitive political issue, there are also people who will oppose your position. Others at the protest can escalate threats to safety, or try to intimidate and discredit those they disagree with. Private actors, whether individuals or groups, can weaponize surveillance tools available to consumers online or at a protest, even if it is as simple as video recording and doxxing attendees.

How bad are the consequences if I fail?

Failing to protect information can have a range of consequences that will depend on the institution and local law enforcement’s response. Some schools defused campus protests by agreeing to enter talks with protesters. Others opted to escalate tensions by having police dismantle encampments and having participants suspended, expelled, or arrested. Such disproportionate disciplinary actions put students at risk in myriad ways, depending on how they relied on the institution. The extent to which institutions will attempt to chill speech with surveillance will vary, but unlike direct physical disruption, surveillance tools may be used with less hesitation.

All interactions with law enforcement carry some risk, and will differ based on your identity and history of police interactions. This risk can be mitigated by knowing your rights and limiting your communication with police unless in the presence of an attorney. 

How likely is it that I will need to protect it?

Disproportionate disciplinary actions often coincide with, and are preceded by, some form of surveillance. Even schools that are more accommodating of peace protests may engage in some level of monitoring, particularly schools that have already adopted surveillance tech. School devices, services, and networks are easy targets, so use alternatives whenever possible: stick to personal devices rather than university-administered ones for sensitive information, and adopt tools that limit monitoring, like Tor. Even banal systems like campus ID cards, presence monitors, class attendance monitoring, and Wi-Fi access points can create a record of student locations or tip schools off to people congregating. Online surveillance is also easy to implement, whether by simply joining groups on social media or by adopting commercial social media monitoring tools.

Schools that invite a police presence make their students and workers subject to the current practices of local law enforcement. Our resource, the Atlas of Surveillance, gives an idea of what technology local law enforcement is capable of using, and our Street-Level Surveillance hub breaks down the capabilities of each device. But other factors, like how well-resourced local law enforcement is, will determine the scale of the response. For example, if local law enforcement already have social media monitoring programs, they may use them on protesters at the request of the university.

Bad actors not directly affiliated with the university or law enforcement may be the most difficult factor to anticipate. These threats can come from people physically present, such as onlookers or counter-protesters, as well as from individuals offsite. Information about protesters can be turned against them for surveillance, harassment, or doxxing. The measures in this guide will also help protect you from these threats.

Finally, don’t confuse your rights with your safety. Even if you are in a context where assembly is legal and surveillance and suppression are not, be prepared for them to happen anyway. Legal protections are retrospective, so for your own safety, be prepared for adversaries willing to overstep them.

How much trouble am I willing to go through to try to prevent potential consequences?

There is no perfect answer to this question, and every protester has their own risks and considerations. In setting this boundary, it is important to communicate it to others and find workable solutions that meet people where they’re at. Being open and judgment-free in these discussions makes the movement being built more consensual and less prone to abuse. Centering consent in organizing can also help weed out bad actors in your own camp who, deliberately or not, raise the risk for everyone who participates.

Sometimes a surveillance self-defense tactic will invite new threats. Some universities and governments have been so eager to get images of protesters’ faces that they have threatened criminal penalties for people wearing masks at gatherings. These potential charges now need to be weighed against the face recognition, doxxing, and retribution someone may face by exposing their face.

Privacy is also a team sport. Investing a lot of energy only in your own personal surveillance defense has diminishing returns, but making the effort to educate peers and adjust the norms of the movement puts less work on any one person and has a potentially greater impact. Sharing the resources in this post and the Surveillance Self-Defense guides, and hosting your own workshops with the Security Education Companion, are good first steps.

Who are my allies?

Cast a wide net: many faculty and staff members may be able to support students, for example with institutional knowledge about school policies. Many school alumni are also invested in the reputation of their alma mater, and can bring outside knowledge and resources.

A number of non-profit organizations can also support protesters who face risks on campus. For example, many campus bail funds have been set up to support arrested protesters. The National Lawyers Guild has chapters across the U.S. that can offer Know Your Rights trainings and train people to become legal observers (people who document a protest so that there is a clear legal record of civil liberties infringements should protesters face prosecution).

Many local solidarity groups may also be able to help provide trainings, street medics, and jail support. Many groups in EFF’s grassroots network, the Electronic Frontier Alliance, also offer free digital rights training and consultations.

Finally, EFF can help surveillance victims directly; email info@eff.org or message us on Signal at 510-243-8020. Even when EFF cannot take on your case, we have a wide network of attorneys and cybersecurity researchers who can offer support.

Tips and Resources

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. To prevent tracking, your best option is to leave all your devices at home, but that’s not always possible and makes communication and planning much more difficult. So it’s useful to get an idea of what sorts of surveillance are feasible, and what you can do to prevent them. This is meant as a starting point, not a comprehensive summary of everything you may need to do or know:

Prepare yourself and your devices for protests

Our guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We have a handy printable version available here, too, that makes it easy to share with others.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea. Tell friends or family when you plan to attend and leave, so that if there are arrests or harassment they can follow up to make sure you are safe. If there may be arrests, make sure to have the phone number of an attorney and possibly coordinate with a jail support group.

Protect your online accounts

Doxxing, when someone exposes information about you, is a tactic reportedly being used against some protesters. This information is often found in public places, like "people search" sites and social media. Being doxxed can be overwhelming and difficult to control in the moment, but you can take some steps to manage it, or at least prepare yourself by learning what information about you is available. To get started, check out the guide The New York Times created to train its journalists to dox themselves, and PEN America's Online Harassment Field Manual.

Compartmentalize

Being deliberate about how and where information is shared can limit the impact of any one breach of privacy. Online, this might look like using different accounts for different purposes or preferring smaller Signal chats, and offline it might mean being deliberate about with whom information is shared, and bringing “clean” devices (without sensitive information) to protests.

Be mindful of potential student surveillance tools 

It’s difficult to know what tools each campus uses to monitor protesters, but colleges may well be using the same tricks they’ve used to monitor students in the past, alongside surveillance tools often used by campus police. One good rule of thumb: if a device, software, or an online account was provided by the school (like an .edu email address or test-taking monitoring software), the school may be able to access what you do on it. Likewise, remember that if you use a corporate or university-controlled tool without end-to-end encryption for communication or collaboration, like online documents or email, the content may be shared by the corporation or university with law enforcement when compelled by a warrant.

Know your rights if you’re arrested

Thousands of students, staff, faculty, and community members have been arrested, but it’s important to remember that the vast majority of people who have participated in street and campus demonstrations have not been arrested or taken into custody. Nevertheless, be careful and know what to do if you’re arrested.

The safest bet is to lock your devices with a PIN or password, turn off biometric unlocks such as face or fingerprint, and say nothing except to assert your rights, for example by refusing consent to a search of your devices, bags, vehicles, or home. Law enforcement can lie to and pressure arrestees into saying things that are later used against them, so waiting until you have a lawyer before speaking is always the right call.

Barring a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Law enforcement may not respect your rights when they’re taking you into custody, but your lawyer and the courts can protect your rights later, especially if you assert them during the arrest and any time in custody.

How Political Campaigns Use Your Data to Target You

Data about potential voters—who they are, where they are, and how to reach them—is an extremely valuable commodity during an election year. And while the right to a secret ballot is a cornerstone of the democratic process, your personal information is gathered, used, and sold along the way. It's not possible to fully shield yourself from all this data processing, but you can take steps to at least minimize and understand it.

Political campaigns use the same invasive tricks that behavioral ads do—pulling in data from a variety of sources online to create a profile—so they can target you. Your digital trail is a critical tool for campaigns, but the process starts in the real world, where longstanding techniques to collect data about you can be useful indicators of how you'll vote. This starts with voter records.

Your IRL Voting Trail Is Still Valuable

Politicians have long had access to public data, like voter registration, party registration, address, and participation information (whether or not a voter voted, not who they voted for). Online access to such records has made them easier to obtain in some states, with unintended consequences, like doxxing.

Campaigns can purchase this voter information from most states. These records provide a rough idea of whether that person will vote or not, and—if they're registered to a particular party—who they might lean toward voting for. Campaigns use this to put every voter into broad categories, like "supporter," "non-supporter," or "undecided." Campaigns gather such information at in-person events, too, like door-knocking and rallies, where you might sign up for emails or phone calls.

Campaigns also share information about you with other campaigns, so if you register with a candidate one year, it's likely that information goes to another in the future. For example, the website for Adam Schiff’s campaign to serve as U.S. Senator from California has a privacy policy with this line under “Sharing of Information”:

With organizations, candidates, campaigns, groups, or causes that we believe have similar political viewpoints, principles, or objectives or share similar goals and with organizations that facilitate communications and information sharing among such groups

Similar language can be found on other campaign sites, including those for Elizabeth Warren and Ted Cruz. These candidate lists are valuable, and are often shared within the national party. In 2017, the Hillary Clinton campaign gave its email list to the Democratic National Committee, a contribution valued at $3.5 million.

If you live in a state with citizen initiative ballot measures, data collected from signature sheets might be shared or used as well. Signing a petition doesn't necessarily mean you support the proposed ballot measure—it's just saying you think it deserves to be put on the ballot. But in most states, these signature pages will remain a part of the public record, and the information you provide may get used for mailings or other targeted political ads. 

How Those Voter Records, and Much More, Lead to Targeted Digital Ads

All that real world information is just one part of the puzzle these days. Political campaigns tap into the same intrusive adtech tracking systems used to deliver online behavioral ads. We saw a glimpse into how this worked after the Cambridge Analytica scandal, and the system has only grown since then.

Specific details are often a mystery, as a political advertising profile may be created by combining disparate information—from consumer scoring data brokers like Acxiom or Experian, smartphone data, and publicly available voter information—into a jumble of data points that’s often hard to trace in any meaningful way. A simplified version of the whole process might go something like this (a toy code sketch follows the list):

  1. A campaign starts with its voter list, which includes names, addresses, and party affiliation. It may have purchased this from the state or its own national committee, or collected some of it for itself through a website or app.
  2. The campaign then turns to a data broker to enhance this list with consumer information. The data broker combines the voter list with its own data, then creates a behavioral profile using inferences based on your shopping, hobbies, demographics, and more. The campaign looks this all over, then chooses some categories of people it thinks will be receptive to its messages in its various targeted ads.
  3. Finally, the campaign turns to an ad targeting company to get the ad on your device. Some ad companies might use an IP address to target the ad to you. As The Markup revealed, other companies might target you based on your phone's location, which is particularly useful in reaching voters not in the campaign's files. 
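
To make step 2 concrete, here is a deliberately toy Python sketch of a data broker “enhancing” a voter file. Every name, category, and score below is invented for illustration; real broker files combine thousands of data points from many sources.

```
# Toy illustration of a broker joining a voter file with consumer data.
# All records, categories, and scores here are invented.

voter_file = [
    {"name": "Alex Doe", "address": "12 Oak St", "party": "unaffiliated"},
]

# Consumer data the broker already holds, keyed on name and address.
broker_data = {
    ("Alex Doe", "12 Oak St"): {
        "purchases": ["outdoor gear", "EV charger"],
        "inferences": {"Electric Vehicle Buyer": 0.87, "Climate Change": 0.74},
    },
}

def enhance(voter):
    """Join one voter record with broker data and pick target segments."""
    extra = broker_data.get((voter["name"], voter["address"]), {})
    profile = {**voter, **extra}
    # The campaign targets people whose inference scores clear some bar.
    profile["target_segments"] = [
        category
        for category, score in extra.get("inferences", {}).items()
        if score > 0.8
    ]
    return profile

for voter in voter_file:
    print(enhance(voter))
```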

In 2020, Open Secrets found political groups paid 37 different data brokers at least $23 million for access to services or data. These data brokers collect information from browser cookies, web beacons, mobile phones, social media platforms, and more. They found that some companies specialize in more general data, while others, like i360, TargetSmart, and Grassroots Analytics, focus on data useful to campaigns or advocacy.

[Screenshot: a spreadsheet with categories including “Qanon,” “Rightwing Militias,” “Right to Repair,” “Inflation Fault,” “Electric Vehicle Buyer,” “Climate Change,” and “Amazon Worker Treatment”]

A sample of some categories and inferences in a political data broker file that we received through a CCPA request shows the wide variety of assumptions these companies may make.

These political data brokers make a lot of promises to campaigns. TargetSmart claims to have 171 million highly accurate cell phone numbers, and i360 claims to have data on 220 million voters. They also tend to offer specialized campaign categories that go beyond the offerings of consumer-focused data brokers. Check out data broker L2’s “National Models & Predictive Analytics” page, which breaks down interests, demographics, and political ideology—including details like “Voter Fraud Belief” and “Ukraine Continue.” The New York Times documented a particularly novel approach to these sorts of profiles, in which a voter analytics firm created a “Covid concern score” by analyzing cell phone location data, then ranked people based on their travel patterns during the pandemic.

Some of these companies target based on location data. For example, El Toro claims to have once “identified over 130,000 IP-matched voter homes that met the client’s targeting criteria. El Toro served banner and video advertisements up to 3 times per day, per voter household – across all devices within the home.”

That “all devices within the home” claim may prove important in the coming elections: as streaming video services integrate more ad-based subscription tiers, that likely means more political ads this year. One company, AdImpact, projects $1.3 billion in political ad spending on “connected television” ads in 2024. This may be driven in part by the move away from tracking cookies, which makes web browsing data less appealing.

In the case of connected televisions, ads can also integrate data based on what you've watched, using information collected through automated content recognition (ACR). Streaming device maker and service provider Roku's pitch to potential political advertisers is straightforward: “there’s an opportunity for campaigns to use their own data like never before, for instance to reach households in a particular district where they need to get out the vote.” Roku claims to have at least 80 million users. As a platform for televisions and “streaming sticks,” and especially if you opted into ACR (we’ll detail how to check below), Roku can collect and use a lot of your viewing data, ranging from streaming apps to broadcast TV and even video games.

This is vastly different from traditional broadcast TV ads, which might be targeted broadly based on a city or state, and the show being aired. Now, a campaign can target an ad at one household, but not their neighbor, even if they're watching the same show. Of the main streaming companies, only Amazon and Netflix don’t accept political ads.

Finally, there are Facebook and Google, two companies that have amassed a mountain of data points about all their users, and which allow campaigns to target based on some of those factors. According to at least one report, political ad spending on Google (mostly through YouTube) is projected to be $552 million, while Facebook is projected at $568 million. Unlike the data brokers discussed above, most of what you see on Facebook and Google is derived from the data collected by the company from its users. This may make it easier to understand why you’re seeing a political ad, for example, if you follow or view content from a specific politician or party, or about a specific political topic.

What You Can Do to Protect Your Privacy

Managing the flow of all this data might feel impossible, but you can take a few important steps to minimize what’s out there. The chances you’ll catch everything are low, but minimizing what is accessible is still a privacy win.

Install Privacy Badger
Considering how much data is collected just from your day-to-day web browsing, it’s a good idea to protect that first. The simplest way to do so is with our own tracking blocker extension, Privacy Badger.

Disable Your Phone Advertising ID and Audit Your Location Settings
Your phone has an ad identifier that makes it simple for advertisers to track and collate everything you do. Thankfully, you can make this much harder for those advertisers by disabling it:

  • On iPhone: Head into Settings > Privacy & Security > Tracking, and make sure “Allow Apps to Request to Track” is disabled. 
  • On Android: Open Settings > Security & Privacy > Privacy > Ads, and select “Delete advertising ID.”

Similarly, as noted above, your location is a valuable asset for campaigns. They can collect your location through data brokers, which usually get it from otherwise unaffiliated apps. This is why it's a good idea to limit what sorts of apps have access to your location:

  • On iPhone: open Settings > Privacy & Security > Location Services, and disable access for any apps that do not need it. You can also set location for only "While using," for certain apps where it's helpful, but unnecessary to track you all the time. Also, consider disabling "Precise Location" for any apps that don't need your exact location (for example, your GPS navigation app needs precise location, but no weather app does).
  • On Android: Open Settings > Location > App location permissions, and confirm that no apps are accessing your location that you don't want to. As with iOS, you can set it to "Allow only while using the app," for apps that don't need it all the time, and disable "Use precise location," for any apps that don't need exact location access.

Opt Out of Tracking on Your TV or Streaming Device, and Any Video Streaming Service
Nearly every brand of TV is connected to the internet these days. Consumer Reports has a guide for disabling what you can on most popular TVs and software platforms. If you use an Apple TV, you can disable the ad identifier following the exact same directions as on your phone.

Since the passage of a number of state privacy laws, streaming services, like other sites, have offered a way for users to opt out of the sale of their info. Many have extended this right outside of states that require it. You'll need to be logged into your streaming service account to take action on most of these, but TechHive has a list of opt out links for popular streaming services to get you started. Select the "Right to Opt Out" option, when offered.

Don't Click on Links in (or Respond to) Political Text Messages
You've likely been receiving political texts for much of the past year, and that's not going to let up until election day. It is increasingly difficult to decipher whether they're legitimate or spam, and with links that often use a URL shortener or odd looking domains, it's best not to click them. If there's a campaign you want to donate to, head directly to the site of the candidate or ballot sponsor.

Create an Alternate Email and Phone Number for Campaign Stuff
If you want to keep updated on campaign or ballot initiatives, consider setting up an email specifically for that, and nothing else. Since a phone number is also often required, it's a good idea to set up a secondary phone number for these same purposes (you can do so for free through services like Google Voice).

Keep an Eye Out for Deceptive Check Boxes
Speaking of signing up for updates, be mindful so you don't sign up for emails you don't want. Campaigns might use pre-selected options for everything from donation amounts to newsletter signups. So, when you sign up with any campaign, keep an eye out for any options you don't intend to opt into.

Mind Your Social Media
Now's a great time to take any sort of "privacy checkup" available on whatever social media platforms you use to help minimize any accidental data sharing. Even though you can't completely opt out of behavioral advertising on Facebook, review your ad preferences and opt out of whatever you can. Also be sure to disable access to off-site activity. You should also opt out of personalized ads on Google's services. You cannot disable behavioral ads on TikTok, but the company doesn't allow political ads.

If you're curious to learn more about why you're seeing an ad to begin with, on Facebook you can always click the three-dot icon on an ad, then click "Why am I seeing this ad?" to learn more. For ads on YouTube, you can click the "More" button and then "About this advertiser" to see some information about who placed the ad. Anywhere else you see a Google ad you can click the "Adchoices" button and then "Why this ad?"

You shouldn't need to spend an afternoon jumping through opt out hoops and tweaking privacy settings on every device you own just so you're not bombarded with highly targeted ads. That’s why EFF supports comprehensive consumer data privacy legislation, including a ban on online behavioral ads.

Democracy works because we participate, and you should be able to do so without sacrificing your privacy. 

How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can)

Cars collect a lot of our personal data, and car companies disclose a lot of that data to third parties. It’s often unclear what’s being collected, and what's being shared and with whom. A recent New York Times article highlighted how data is shared by G.M. with insurance companies, sometimes without clear knowledge from the driver. If you're curious about what your car knows about you, you might be able to find out. In some cases, you may even be able to opt out of some of that sharing of data.

Why Your Car Collects and Shares Data

A car (and its app, if you installed one on your phone) can collect all sorts of data in the background with and without you realizing it. This in turn may be shared for a wide variety of purposes, including advertising and risk-assessment for insurance companies. The list of data collected is long and dependent on the car’s make, model, and trim.  But if you look through any car maker’s privacy policy, you'll see some trends:

  • Diagnostics data, sometimes referred to as “vehicle health data,” may be used internally for quality assurance, research, recall tracking, service issues, and similar unsurprising car-related purposes. This type of data may also be shared with dealers or repair companies for service.
  • Location information may be collected for emergency services, mapping, and to catalog other environmental information about where a car is operated. Some cars may give you access to the vehicle’s location in the app.
  • Some usage data may be shared or used internally for advertising. Your daily driving or car maintenance habits, alongside location data, is a valuable asset to the targeted advertising ecosystem. 
  • All of this data could be shared with law enforcement.
  • Information about your driving habits, sometimes referred to as “Driving data” or “Driver behavior information,” may be shared with insurance companies and used to alter your premiums. This can range from odometer readings to braking and acceleration statistics, and even data about what time of day you drive.

Surprise insurance sharing is the thrust of The New York Times article, and certainly not the only problem with car data. We've written previously about how insurance companies offer discounts for customers who opt into a usage-based insurance program. Every state except California currently allows the use of telematics data for insurance rating, but privacy protections for this data vary widely across states.

When you sign up directly through an insurer, these opt-in insurance programs have a pretty clear tradeoff and sign-up process, and the insurer will likely send you a physical device that plugs into your car's OBD port and collects and transmits data back to the insurer.

But some cars have their own internal systems for sharing information with insurance companies, which can piggyback on an app you may have installed or on the car’s own internet connection. Many of these programs operate behind dense legalese. You may have accidentally “agreed” to such sharing without realizing it while buying a new car—likely in a state of exhaustion and excitement after finally completing a gauntlet of finance and legal forms.

This gets more confusing: car makers use different names for their insurance sharing programs. Some, like Toyota's “Insure Connect,” are pretty obviously named. But others, like Honda, tuck information about sharing with a data broker (which then shares with insurance companies) inside a privacy policy that applies after you enable its “Driver Feedback” feature. Others might bundle the insurance sharing opt-in with broader services you might associate more with safety or theft, like G.M.’s OnStar, Subaru’s Starlink, and Volkswagen’s Car-Net.

The amount of data shared differs by company, too. Some car makers might share only small amounts of data, like an odometer reading, while others might share specific details about driving habits.

That's just the insurance data sharing. There's little doubt that many car makers sell other data for behavioral advertising, and, like the rest of that industry, it's nearly impossible to track exactly where your data goes and how it's used.

See What Data Your Car Has (and Stop the Sharing)

This is a general guide to seeing what your car collects and whom it shares it with. It does not cover specific scenarios—like intimate partner violence—that may raise distinctive driver privacy issues.

See How Your Car Handles (Data)
Start by seeing what your car is equipped to collect using Privacy4Cars’ Vehicle Privacy Report. Once you enter your car’s VIN, the site provides a rough idea of what sorts of data your car collects. It's also worth reading about your car manufacturer’s more general practices on Mozilla's Privacy Not Included site.

Check the Privacy Options In Your Car’s Apps and Infotainment System
If you use an app for your car, head into the app’s settings and look for any sort of data sharing options, under headings like “Data Privacy” or “Data Usage.” When possible, opt out of sharing any data with third parties or for behavioral advertising. As annoying as it may be, it’s important to read carefully here so you don’t accidentally disable something you want, like a car’s SOS feature. Be mindful that, at least according to Mozilla’s report on Tesla, opting out of certain data sharing might someday make the car undriveable. Now’s also a good time to disable ad tracking on your phone.

When it comes to sharing with insurance companies, you’re looking for an option that may be something obvious, like Toyota’s “Insure Connect,” or less obvious, like Kia’s “Driving Score.” If your car’s app has any sort of driver scoring or feedback option—some other names include GM’s “Smart Driver,” Honda’s “Driver Feedback,” or Mitsubishi’s “Driving Score”—there’s a chance it’s sharing that data with an insurance company. Check for these options in both the app and the car’s infotainment system.

If you did accidentally sign up for sharing data with insurance companies, you may want to call your insurance company to see how doing so may affect your premiums. Depending on your driving habits, your premiums might go up or down, and in either case you don’t want a surprise bill.

File a Privacy Request with the Car Maker
Next, file a privacy request with the car manufacturer so you can see exactly what data the company has collected about you. Some car makers will provide this to anyone who asks. Others might only respond to requests from residents of states with a consumer data privacy law that requires their response. The International Association of Privacy Professionals has published this list of states with such laws.

In these states, you have a “right to know” or “right to access” your data, which requires the company to send you a copy of what personal information it collected about you. Some of these states also guarantee “data portability,” meaning the right to access your data in a machine-readable format. File one of these requests, and you should receive a copy of your data. In some states, you can also file a request for the car maker to not sell or share your information, or to delete it. While the car maker might not be legally required to respond to your request if you're not from a state with these privacy rights, it doesn’t hurt to ask anyway.

Every company tends to word these requests a little differently, but you’re looking for options to get a copy of your data, and ask them to stop sharing it. This typically requires filling out a separate request form for each type of request.

Here are the privacy request pages for the major car brands:

Sometimes, you will need to confirm the request in an email, so be sure to keep an eye on your inbox.

Check for Data On Popular Data Brokers Known to Share with Insurers
Finally, request your data from data brokers known to hand car data to insurers. For example, do so with the two companies mentioned in The New York Times’ article: 

Now, you wait. In most states, within 45 to 90 days you should receive an email from the car maker, and another from the data brokers, which will often include a link to your data. You will typically get a CSV file, though it may also be a PDF, XLS, or even a folder with a whole webpage and an HTML file. If you don't have any sort of spreadsheet software on your computer, you might struggle to open it up, but most of the files you get can be opened in free programs, like Google Sheets or LibreOffice.
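
If you’d rather not install spreadsheet software, a few lines of Python can give you a first look at one of these files. This is a minimal sketch that assumes you received a CSV with a header row; the filename is a placeholder for whatever the company actually sends.

```
import csv

# Placeholder filename; use the file the car maker or broker sent you.
with open("my_vehicle_data.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    print("Columns:", reader.fieldnames)  # The categories they track.
    for i, row in enumerate(reader):
        print(row)  # Preview the first ten rows.
        if i >= 9:
            break
```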

Without a national law that puts privacy first, there is little that most people can do to stop this sort of data sharing. Moreover, the steps above clearly require far too much effort for most people to take. That’s why we need much more than these consumer rights to know, to delete, and to opt out of disclosure: we also need laws that automatically require corporations to minimize the data they process about us, and to get our opt-in consent before processing our data. As to car insurers, we've outlined exactly what sort of guardrails we'd like to see here.

As The New York Times' reporting revealed, many people were surprised to learn how their data is collected, disclosed, and used, even if there was an opt-in consent screen. This is a clear indication that car makers need to do better. 

Four Infosec Tools for Resistance this International Women’s Day 

While online violence is alarmingly common globally, women are often more likely to be the target of mass online attacks, nonconsensual leaks of sensitive information and content, and other forms of online violence. 

This International Women’s Day, visit EFF’s Surveillance Self-Defense (SSD) to learn how to defend yourself and your friends from surveillance. In addition to tutorials for installing and using security-friendly software, SSD walks you through concepts like making a security plan, the importance of strong passwords, and protecting metadata.

1. Make Your Own Security Plan

This IWD, learn what a security plan looks like and how you can build one. Trying to protect your online data—like pictures, private messages, or documents—from everything all the time is impractical and exhausting. But, have no fear! Security is a process, and through thoughtful planning, you can put together a plan that’s best for you. Security isn’t just about the tools you use or the software you download. It begins with understanding the unique threats you face and how you can counter those threats. 

2. Protect Yourself on Social Networks

Depending on your circumstances, you may need to protect yourself against the social network itself, against other users of the site, or both. Social networks are among the most popular websites on the internet. Facebook, TikTok, and Instagram each have over a billion users. Social networks were generally built on the idea of sharing posts, photographs, and personal information. They have also become forums for organizing and speaking. Any of these activities can rely on privacy and pseudonymity. Visit our SSD guide to learn how to protect yourself.

3. Tips for Attending Protests

Keep yourself, your devices, and your community safe while you make your voice heard. Now, more than ever, people must be able to hold those in power accountable and inspire others through the act of protest. Protecting your electronic devices and digital assets before, during, and after a protest is vital to keeping yourself and your information safe, as well as getting your message out. Theft, damage, confiscation, or forced deletion of media can disrupt your ability to publish your experiences, and those engaging in protest may be subject to search or arrest, or have their movements and associations surveilled. 

4. Communicate Securely with Signal or WhatsApp

Everything you say in a chat app should be private, viewable by only you and the person you're talking with. But that's not how all chats or DMs work. Most of those communication tools aren't end-to-end encrypted, which means the company that runs the software could view your chats, or hand over transcripts to law enforcement. That's why it's best to use a chat app like Signal any time you can. Signal uses end-to-end encryption, which means that nobody, not even Signal, can see the contents of your chats. Of course, you can't necessarily force everyone you know to use the communication tool of your choice, but thankfully other popular tools, like Apple's Messages, WhatsApp, and more recently Facebook's Messenger, all use end-to-end encryption too, as long as you're communicating with others on those same platforms. The more people who use these tools, even for innocuous conversations, the better.

On International Women’s Day and every day, stay safe out there! Surveillance self-defense can help.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Celebrating 15 Years of Surveillance Self-Defense

On March 3rd, 2009, we launched Surveillance Self-Defense (SSD). At the time, we pitched it as "an online how-to guide for protecting your private data against government spying." In the 15 years since, hundreds of people have contributed to SSD, over 20 million people have read it, and the content has nearly doubled in length, from 40,000 words to almost 80,000. SSD has served as inspiration for many other guides focused on keeping specific populations safe, and those guides have in turn affected how we've approached SSD. A lot has changed in the world over the last 15 years, and SSD has changed with it.

The Year Is 2009

Let's take a minute to travel back in time to the initial announcement of SSD. The site launched with the support of the Open Society Institute and was written entirely by just a few people. We detailed exactly what our intentions were at the start:

EFF created the Surveillance Self-Defense site to educate Americans about the law and technology of communications surveillance and computer searches and seizures, and to provide the information and tools necessary to keep their private data out of the government's hands… The Surveillance Self-Defense project offers citizens a legal and technical toolkit with tips on how to defend themselves in case the government attempts to search, seize, subpoena or spy on their most private data.

[Screenshot: SSD in 2009, with a red logo and a block of text]

SSD's design when it first launched in 2009.

To put this further into context, it's worth looking at where we were in 2009. Avatar was the top grossing movie of the year. Barack Obama was in his first term as president in the U.S. In a then-novel approach, Iranians turned to Twitter to organize protests. The NSA has a long history of spying on Americans, but we hadn't gotten to Jewel v. NSA or the Snowden revelations yet. And while the iPhone had been around for two years, it hadn't seen its first big privacy controversy yet (that would come in December of that year, but it'd be another year still before we hit the "your apps are watching you" stage).

Most importantly, in 2009 it was more complicated to keep your data secure than it is today. HTTPS wasn't common, using Tor required more technical know-how than it does nowadays, encrypted IMs were the fastest way to communicate securely, and full-disk encryption wasn't a common feature on smartphones. Even on computers, disk encryption required special software and knowledge to implement, not to mention time: solid-state drives were still extremely expensive in 2009, so most people had spinning hard drives, which took ages to encrypt and usually slowed the computer down significantly.

And thus, SSD in 2009 focused heavily on law enforcement and government access with its advice. Not long after the launch in 2009, in the midst of the Iranian uprising, we launched the international version, which focused on the concerns of individuals struggling to preserve their right to free expression in authoritarian regimes.

And that's where SSD stood, mostly as-is, for about six years. 

The Redesigns

In 2014, we redesigned and relaunched SSD with support from the Ford Foundation. The relaunch had at least 80 people involved in the writing, reviewing, design, and translation process. With the relaunch, there was also a shift in the mission as the threats expanded from just the government, to corporate and personal risks as well. From the press release:

"Everyone has something to protect, whether it's from the government or stalkers or data-miners," said EFF International Director Danny O'Brien. "Surveillance Self-Defense will help you think through your personal risk factors and concerns—is it an authoritarian government you need to worry about, or an ex-spouse, or your employer?—and guide you to appropriate tools and practices based on your specific situation."

[Screenshot: SSD in 2014, with a logo of two crossed keys and a block of text]

2014 proved to be an opportune year for a major update. After the murders of Michael Brown and Eric Garner, protestors hit the streets across the U.S., which made our protest guide particularly useful. There were also major security vulnerabilities that year, like Heartbleed, which caused all sorts of security issues for website operators and their visitors, and Shellshock, which left everything from servers to cameras open to exploits, ushering in what felt like an endless stream of software updates for everything with a computer chip in it. And of course, there was still fallout from the 2013 Snowden leaks.

In 2018 we did another redesign, adding a new logo for SSD to match EFF's new design. This is more or less the same design the site has today.

[Screenshot: SSD's current design, with an infinity logo wrapped around a lock and key]

SSD's current design, which further clarifies what sections a guide is in, and expands the security scenarios.

Perhaps the most notable difference between this iteration of SSD and the years before is the lack of detailed reasoning explaining the need for its existence on the front page. No longer was it necessary to explain why we all need to practice surveillance self-defense. Online surveillance had gone mainstream.

Shifting Language Over the Years

As the years passed and the site was redesigned, we also shifted how we talked about security. In 2009 we wrote about security with terms like "adversaries," "defensive technology," "threat models," and "assets." These were all common cybersecurity terms at the time, but they made security sound like a military exercise, which often disenfranchised the very people who needed help. For example, in the later part of the 2010s we reworked the idea of "threat modeling" when we published Your Security Plan, which was meant to be less intimidating and more inclusive of the various types of risks that people face.

The advice in SSD has changed over the years, too. Take passwords as an example, where in 2009 we said, "Although we recommend memorizing your passwords, we recognize you probably won't." First off, rude! Second off, maybe that could fly with the lower number of accounts we all had back in 2009, but nowadays nobody is going to remember hundreds of passwords. And regardless, that seems pretty dang impossible when paired with the final bit of advice, "You should change passwords every week, every month, or every year — it all depends on the threat, the risk, and the value of the asset, traded against usability and convenience."

Moving on to 2015, we phrased this same sentiment much differently: "Reusing passwords is an exceptionally bad security practice, because if an attacker gets hold of one password, she will often try using that password on various accounts belonging to the same person… Avoiding password reuse is a valuable security precaution, but you won't be able to remember all your passwords if each one is different. Fortunately, there are software tools to help with this—a password manager."

Well, that's much more polite!

Since then, we've toned that down even more: "Reusing passwords is a dangerous security practice. If someone gets ahold of your password—whether that's from a data breach, or wherever else—they can often gain access to any other account you used that same password. The solution is to use unique passwords everywhere and take additional steps to secure your accounts when possible."
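
That advice is easy to act on: a password manager will generate and remember unique passwords for you. As a minimal sketch of what “unique and random” means in practice, here is how you could generate such passwords yourself with Python’s standard secrets module.

```
import secrets
import string

# A different random password for every account, so one breach
# can't unlock the others.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for account in ("email", "bank", "social media"):
    print(account, "->", generate_password())
```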

Security is an ever-evolving process, and so is how we talk about it. The more people we bring on board, the better it is for everyone. How we talk about surveillance self-defense will assuredly continue to adapt in the future.

Shifting Language(s) Over the Years

Initially, in 2009, SSD was available only in English and, soon after launch, in Bulgarian. In the 2014 relaunch we added Arabic and Spanish, then French, Thai, Vietnamese, and Urdu in 2015. Later that year, we added a handful of Amharic translations, too. This was accomplished through a web of people in dozens of countries who volunteered to translate and review everything. Many of these translations were done for highly specific reasons. For example, we had a Google Policy Fellow, Endalk Chala, who was part of the Zone 9 bloggers in Ethiopia. He translated everything into Amharic as he was fighting for his colleagues and friends who were imprisoned in Ethiopia on terrorism charges.

By 2019, we were translating most of SSD into at least 10 languages: Amharic, Arabic, Spanish, French, Russian, Turkish, Vietnamese, Brazilian Portuguese, Thai, and Urdu (as well as additional, externally-hosted community translations in Indonesian Bahasa, Burmese, Traditional Chinese, Igbo, Khmer, Swahili, Yoruba, and Twi).

Currently, we're focusing on getting the entirety of SSD re-translated into seven languages, then focusing our efforts on translating specific guides into other languages. 

Always Updating

Since 2009, we've done our best to review and update the guides in SSD. This has included minor changes in response to news events, deprecating guides entirely when they no longer fit into modern security plans, and massive rewrites when technology has changed.

The original version of SSD launched mostly as static text (we even offered a printer-friendly version); updates and revisions did occur, but they were not publicly tracked as clearly as they are today. In its early years, SSD provided useful guidance across a number of important events, like Occupy Wall Street, before the major 2014 site redesign helped it become more useful for training activists, including around Ferguson and Standing Rock, among others. The ability to update SSD along with changing trends and needs has ensured it remains useful as a resource.

That redesign also better facilitated the updates process. The site became easier to navigate and use, and easier to update. For example, in 2017 we took on a round of guide audits in response to concerns following the 2016 election. In 2019 we continued that process with around seven major updates to SSD, and in 2020, we did five. We don't have great stats for 2021 and 2022, but in 2023 we managed 14 major updates or new guides. We're hoping to have the majority of SSD reviewed and revamped by the end of this year, with a handful of expansions along the way.

Which brings us to the future of SSD. We will continue updating, adapting, and adding to SSD in the coming years. It is often impossible to know what will be needed, but rest assured we'll be there to answer that whenever we can. As mentioned above, this includes getting more translations underway, and continuing to ensure that everything is accurate and up-to-date so SSD can remain one of the best repositories of security information available online.

We hope you’ll join EFF in celebrating 15 years of SSD!

Surveillance Self-Defense: 2023 Year in Review

It's been a big year for Surveillance Self-Defense (SSD), our repository of self-help resources to help better protect you and your friends from online spying. We've done a number of updates and tackled a few new emerging topics with blog posts.

Fighting for digital security and privacy rights is important, but sometimes we all just need to know what steps we can take to minimize spying, and, when those steps aren't possible, to understand how things work so we can stay as safe as possible. To do this, we break SSD into four sections:

  • Basics: A starter resource that includes overviews of how digital surveillance works.
  • Tool Guides: Step-by-step tutorials on using privacy and security tools.
  • Further Learning: Explainers about protecting your digital privacy.
  • Security Scenarios: Playlists of our resources for specific use cases, such as LGBTQ+ youth, journalists, activists, and more.

But not every topic fits neatly into SSD, so sometimes we also tackle security education issues on our blog, which tends to focus more on news events or new technology that may not have rolled out widely yet. Each has its place, and each saw a variety of new guidance this year.

Re-tooling Our SSD Tool Guides

Surveillance Self-Defense has provided expert guidance for security and privacy for 14 years. And in those years it has seen a number of revisions, expansions, and changes. We try to consistently audit and update SSD so it contains up-to-date information. Each guide has a "last reviewed" date at the start so you can quickly see when it last got an expert review.

This year we tackled a number of updates, and took the time to take a new approach with two of our most popular guides: Signal and WhatsApp. For these, we combined the once-separate Android and iPhone guides into one, making them easier to update (and translate) in the future.

We also updated many other guides this year with new information, screenshots, and advice.

SSD also received two new guides. The first was a new guide for choosing a password manager, one of the most important security tools, and one that can be overwhelming to research and start using. The second was a guide for using Tor on mobile devices, where the privacy-protecting software is increasingly useful.

Providing New Guidance and Responding to News

Part of security education is explaining new and old technologies, responding to news events, and laying out details of any technological quirks we find. For this, we tend to turn to our blog instead of SSD. But the core idea is the same: provide self-help guidance for navigating various security and privacy concerns.

We came up with guidance for passkeys, a new type of login that eliminates the need for passwords altogether. Passkeys can be confusing, both from a security perspective and from a basic usability perspective. We do think there's work that can be done to improve them, and like most security advice, the answer to the question of whether you should use them is "it depends." But for many people, if you’re not already using a password manager, passkeys will be a tremendous increase in security.

When it comes to quirks in apps, we took a look at what happens when you delete a replied-to message in encrypted messaging apps. There are all sorts of little oddities with end-to-end encrypted messaging apps that are worth being aware of. While they don't compromise the integrity of the messaging—your communications are safe from the companies that run them—they can sometimes act unexpectedly, like keeping a message you deleted around longer than you may realize if someone in the chat replied to it directly.

The DNA site 23andMe suffered a “credential stuffing” attack that resulted in 6.9 million users' data appearing on hacker forums. Only a relatively small number of accounts were actually compromised, but once in, the attacker was able to scrape information about other users through a feature known as DNA Relatives, which provided users with an expansive family tree. There's nothing you can do after the fact if your data was included, but we explained what happened, and the handful of steps you can take to better secure your account and make it more private in the future.

Google released its "Privacy Sandbox" feature, which, while improved from initial proposals back in 2019, still tracks your internet use for behavioral advertising by using your web browsing to define "topics" of interest, then queuing up ads based on those interests. The idea is that instead of the dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google will track your interests in the browser itself, controlling even more of the advertising ecosystem than it already does. Our blog shows you how to disable it, if you choose to.

We also took a deep dive into an Android tablet meant for kids that turned out to be filled with sketchyware. The tablet was riddled with all sorts of software we didn't like, but we shared guidance for how to better secure an Android tablet—all steps worth taking before you hand over any Android tablet as a holiday gift.

After a hard-fought battle pushing Apple to encrypt iCloud backups, the company actually took it a step further, allowing you to encrypt nearly everything in iCloud, including those backups, with a new feature called Advanced Data Protection. Unfortunately, it's not the default setting, so you should enable it for yourself as soon as you can.

Similarly, Meta finally rolled out end-to-end encryption for Messenger, which is thankfully enabled by default, though there are some quirks with how backups work that we explain in this blog post.

EFF worked hard in 2023 to explain new consumer security technologies, provide guidance for tools, and help everyone communicate securely. There's plenty more work to be done next year, and we'll be here in 2024 to explain what you can do, how to do it, and how it all works.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

No Robots(.txt): How to Ask ChatGPT and Google Bard to Not Use Your Website for Training

December 12, 2023 at 13:19

Both OpenAI and Google have released guidance for website owners who do not want the two companies using the content of their sites to train the company's large language models (LLMs). We've long been supporters of the right to scrape websites—the process of using a computer to load and read pages of a website for later analysis—as a tool for research, journalism, and archivists. We believe this practice is still lawful when collecting training data for generative AI, but the question of whether something should be illegal is different from whether it may be considered rude, gauche, or unpleasant. As norms continue to develop around what kinds of scraping and what uses of scraped data are considered acceptable, it is useful to have a tool for website operators to automatically signal their preference to crawlers. Asking OpenAI and Google (and anyone else who chooses to honor the preference) to not include scrapes of your site in their models is an easy process as long as you can access your site's file structure.

We've talked before about how these models use art for training, and the general idea and process is the same for text. Researchers have long used collections of data scraped from the internet for studies of censorship, malware, sociology, language, and other applications, including generative AI. Today, both academic and for-profit researchers collect training data for AI using bots that go out searching all over the web and “scrape up” or store the content of each site they come across. This might be used to create purely text-based tools, or a system might collect images that may be associated with certain text and try to glean connections between the words and the images during training. The end result, at least currently, is the chatbots we've seen in the form of Google Bard and ChatGPT.


If you do not want your website's content used for this training, you can ask the bots deployed by Google and OpenAI to skip over your site. Keep in mind that this only applies to future scraping. If Google or OpenAI already have data from your site, they will not remove it. It also doesn't stop the countless other companies out there training their own LLMs, and doesn't affect anything you've posted elsewhere, like on social networks or forums. Nor would it stop models that are trained on large data sets of scraped websites that aren't affiliated with a specific company. For example, OpenAI's GPT-3 and Meta's LLaMa were both trained using data mostly collected from Common Crawl, an open source archive of large portions of the internet that is routinely used for important research. You can block Common Crawl, but doing so blocks the web crawler from using your data in all its data sets, many of which have nothing to do with AI.
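If you did decide to accept that tradeoff, Common Crawl's documentation says its crawler identifies itself as "CCBot," so blocking it should follow the same pattern as the examples later in this post:

User-agent: CCBot
Disallow: /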

There's no technical requirement that a bot obey your requests. Currently, only Google and OpenAI have announced that this is the way to opt out, so other AI companies may not care about this at all, or may add their own directions for opting out. This also doesn't block any other types of scraping that are used for research or other means, so if you're generally in favor of scraping but uneasy with the use of your website's content in a corporation's AI training set, this is one step you can take.

Before we get to the how, we need to explain what exactly you'll be editing to do this.

What's a Robots.txt?

In order to ask these companies not to scrape your site, you need to edit (or create) a file located on your website called "robots.txt." A robots.txt is a set of instructions for bots and web crawlers. Up until this point, it was mostly used to provide useful information for search engines as their bots scraped the web. If website owners want to ask a specific search engine or other bot to not scan their site, they can enter that in their robots.txt file. Bots can always choose to ignore this, but many crawling services respect the request.

This might all sound rather technical, but it's really nothing more than a small text file located in the root folder of your site, like "https://www.example.com/robots.txt." Anyone can see this file on any website. For example, here's The New York Times' robots.txt, which currently blocks both ChatGPT and Bard. 
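If you're curious what a well-behaved crawler actually does with that file, Python's standard library includes a robots.txt parser. Here's a rough sketch of the check a polite bot performs before fetching a page (the example.com URLs are just placeholders):

import urllib.robotparser

# Fetch and parse a site's robots.txt, the same way a polite crawler would.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Ask whether a given user agent may fetch a given page. A crawler that
# honors robots.txt skips any page where this returns False.
print(parser.can_fetch("GPTBot", "https://www.example.com/blog/some-post"))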

If you run your own website, you should have some way to access the file structure of that site, either through your hosting provider's web portal or FTP. You may need to comb through your provider's documentation for help figuring out how to access this folder. In most cases, your site will already have a robots.txt created, even if it's blank, but if you do need to create a file, you can do so with any plain text editor. Google has guidance for doing so here.


What to Include In Your Robots.txt to Block ChatGPT and Google Bard

With all that out of the way, here's what to include in your site's robots.txt file if you do not want ChatGPT and Google to use the contents of your site to train their generative AI models. If you want to cover the entirety of your site, add these lines to your robots.txt file:

ChatGPT

User-agent: GPTBot
Disallow: /

Google Bard

User-agent: Google-Extended
Disallow: /

You can also narrow this down to block access to only certain folders on your site. Maybe you don't mind if most of the data on your site is used for training, but you have a blog that you use as a journal; you can opt out just that folder. For example, if the blog is located at yoursite.com/blog, you'd use this:

ChatGPT

User-agent: GPTBot
Disallow: /blog

Google Bard

User-agent: Google-Extended
Disallow: /blog
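To be clear, these don't need to be separate files: both records can live in the same robots.txt, one after the other, separated by a blank line. Blocking both crawlers from an entire site in a single file would look like this:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /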

As mentioned above, we at EFF will not be using these flags because we believe scraping is a powerful tool for research and access to information; we want the information we’re providing to spread far and wide and to be represented in the outputs and answers provided by LLMs. Of course, individual website owners have different views for their blogs, portfolios, or whatever else they use their websites for. We're in favor of giving people the means to express their preferences, and it would ease many minds for other companies with similar AI products, like Anthropic, Amazon, and countless others, to announce that they'd respect similar requests.

Think Twice Before Giving Surveillance for the Holidays

December 7, 2023 at 15:22

With the holidays upon us, it's easy to default to giving the tech gifts that retailers tend to push on us this time of year: smart speakers, video doorbells, Bluetooth trackers, fitness trackers, and other connected gadgets are all very popular gifts. But before you give one, think twice about what you're opting that person into.

A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair).

One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for.

For example, a smart speaker might seem like a fun stocking stuffer. But unless the giftee is tapped deeply into tech news, they likely don't know there's a chance a human will review their recordings. They also may not be aware that some of these speakers collect an enormous amount of data about how they're used, typically for advertising, and that any connected device might prove surprisingly useful to law enforcement, too.

There's also the problem of tech companies getting acquired, as we've seen recently with Tile, iRobot, and Fitbit. The new business can suddenly change the terms of the privacy and security agreements that users made with the old business when they started using those products.

And let's not forget about kids. They've long been subjected to surveillance from elves and their managers, and electronic gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids’ smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. 

What To Do Instead

While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories. This way, instead of just buying any old smart device at random because it's on sale, you at least have the context of what sort of data it might collect, how the company has behaved in the past, and what sorts of potential dangers to consider. U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches.

Finally, two details are worth keeping in mind while you shop. First, some “smart” devices can be used without their corresponding apps, which should be viewed as a benefit, because we've seen before that app-only gadgets can be bricked by a shift in company policies. Second, remember that not everything needs to be “smart” in the first place; often these features add little to the usability of the product.

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen.

If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping set it up as privately as possible. Take a few minutes after they've unboxed the item and walk through the set up process with them. Some options to look for: 

  • Enable two-factor authentication when available to help secure their new account.
  • If there are any social sharing settings—particularly popular with fitness trackers and game consoles—disable any unintended sharing that might end up on a public profile.
  • Look for any options to enable automatic updates. This is usually enabled by default these days, but it's always good to double-check.
  • If there's an app associated with the new device (and there often is), help them choose which permissions to allow, and which to deny. Keep an eye out for location data in particular, especially if there's no logical reason for the app to need it.
  • While you're at it, help them with other settings on their phone, and make sure to disable the phone’s advertising ID.
  • Speaking of advertising IDs, some devices have their own advertising settings, usually located somewhere like Settings > Privacy > Ad Preferences. If there's an option to disable any ad tracking, take advantage of it. While you're in the settings, you may find other device-specific privacy or data usage settings. Take that opportunity to opt out of any tracking and collection when you can. This will be very device-dependent, but it's especially worth doing on anything you know tracks loads of data, like smart TVs.
  • If you're helping set up a video or audio device, like a smart speaker or robot vacuum, poke around in the options to see if you can disable any sort of "human review" of recordings.

If, during the setup process, you notice gaps in their security hygiene, it's also a great opportunity to help them with other security measures, like setting up a password manager.

Giving the gift of electronics shouldn’t come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy; it would also raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right of action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.

What to Do If You're Concerned About the 23andMe Breach

October 20, 2023 at 12:53

In early October, a bad actor claimed to be selling account details from the genetic testing service 23andMe, including the alleged data of one million users of Ashkenazi Jewish descent and another 100,000 users of Chinese descent. By mid-October, this had expanded to another four million more general accounts. The data includes display name, birth year, sex, and some details about genetic ancestry results, but no genetic data. There's nothing you can do if your data was already accessed, but it's a good time to reconsider how you're using the service to begin with.

What Happened

In a blog post, 23andMe claims the bad actors accessed the accounts through "credential stuffing": the practice of taking usernames and passwords leaked in a previous data breach and trying them on another website in the hope that people have reused passwords.

Details about any specific accounts affected are still scant, but we do know some broad strokes. TechCrunch found the data may have been first leaked back in August when a bad actor posted on a hacking forum that they'd accessed 300 terabytes of stolen 23andMe user data. At the time, not much was made of the supposed breach, but then in early October a bad actor posted a data sample on a different forum claiming that the full set of data contained 1 million data points about people with Ashkenazi Jewish ancestry. In a statement to The Washington Post, a 23andMe representative noted that this "would include people with even 1% Jewish ancestry." Soon after, another poster claimed to have data on 100,000 Chinese users. Then, on October 18, yet another dataset showed up on the same forum that included four million users, with the poster claiming it included data from "the wealthiest people living in the U.S. and Western Europe on this list."

23andMe suggests that the bad actors compiled the data from accounts using the optional "DNA Relatives" feature, which allows 23andMe users to automatically share data with others on the platform who they may be relatives with. 

Basically, it appears an attacker took username and password combinations from previous breaches and tried those combinations to see if they worked on 23andMe accounts. When logins worked, they scraped all the information they could, including all the shared data about relatives if both the relatives and the original account opted into the DNA Relatives feature.

That's all we know right now. 23andMe says it will continue updating its blog post here with new information as it has it.

Why It Matters

Genetic information is an important tool in testing for disease markers and researching family history, but there are no federal laws that clearly protect users of online genetic testing sites like 23andMe and Ancestry.com. The ability to research family history and disease risk shouldn’t carry the risk that our data will be accessible in data breaches, through scraped accounts, by law enforcement, insurers, or in other ways we can't foresee. 

It's still unclear whether the data deliberately targets the Ashkenazi Jewish population or whether that framing is a tasteless way to draw attention to the data sale, but the fact that the data can be used to target ethnic groups is unsettling. 23andMe pitches "DNA Relatives" almost like a social network, and a fun way to find a second cousin or two. There are some privacy guardrails on using the feature, like the option to hide your full name, but with a potentially full family tree otherwise available, an individual's privacy choices here may not be that protective.

23andMe is generally one of the better actors in this space. The company requires an individualized warrant for police access to its data, doesn't allow direct access to all data (unlike GEDmatch and FTDNA), and pushes back on overbroad warrants. But putting the burden on its customers to use unique passwords and to opt into (instead of requiring) account protection features like two-factor authentication is an unfortunate look for a company that handles sensitive data.

Reusing passwords is a common practice, but instead of blaming its customers, 23andMe should be doing more to make its default protections stronger. Features like required two-factor authentication and frequent privacy check-up reminders, such as those offered by most social networks these days, could go a long way toward helping users reconsider and better understand their privacy.

How to Best Protect Your Account

If your data is included in this stolen data set, there's not much you can do to get your data back, nor is there a way to search through it to see if your information is included. But you should log into your 23andMe account to make some changes to your security and privacy settings to protect against any issues in the future:

  • 23andMe is currently requiring all users to change their passwords. When you create your new one, be sure to use a unique password. A password manager can help make this easier, and can also usually tell you if a password you've used has been found in a breach; either way, you should create a unique password for each site (for one way to check a password yourself, see the sketch after this list).
  • Enable two-factor authentication on your 23andMe account by following the directions here. This makes it so that in order to log into your account, you'll need to provide not only your username and password, but also a second factor, in this case a code from a two-factor authentication app like Authy or Google Authenticator.
  • Change your display name in DNA Relatives so it's just your initials, or consider disabling this feature entirely if you don't use it. 
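If you don't use a password manager with breach monitoring, one way to check whether a password has shown up in known breaches is Have I Been Pwned's free Pwned Passwords API, which is designed so your full password never leaves your machine: you send only the first five characters of the password's SHA-1 hash and compare the returned matches locally. Here's a minimal sketch of that check in Python, using only the standard library (the example password is, of course, a placeholder):

import hashlib
import urllib.request

def breach_count(password):
    # Hash the password with SHA-1; only the first five hex characters
    # of the digest are ever sent over the network.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]

    # The API returns every breached hash suffix matching that prefix,
    # one "SUFFIX:COUNT" pair per line. We match the rest locally.
    url = "https://api.pwnedpasswords.com/range/" + prefix
    with urllib.request.urlopen(url) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

# A count above zero means the password has appeared in a known breach
# and shouldn't be used anywhere, let alone reused.
print(breach_count("hunter2"))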

Taking these steps may not protect against other unforeseen privacy invasions, but they can at least better protect your account from the potential issues we know exist today.

How to Download and Delete Your Data

If this situation makes you uneasy with your data being on the platform, or you've already gotten out of it what you wanted, then you may want to delete your account. But before you do so, consider downloading the data for your own records. To download your data:

  1. Log into your 23andMe account and click your username, then "Settings." 
  2. Scroll down to the bottom where it says "23andMe Data" and click "View."
  3. Here, you'll find the option to download various parts of your 23andMe data. The most important ones to consider are:
    1. The "Reports Summary" includes details like the "Wellness Reports," "Ancestry Reports," and "Traits Reports."
    2. The "Ancestry Composition Raw Data" the company's interpretation of your raw genetic data.
    3. If you were using the DNA Relatives feature, the "Family Tree Data" includes all the information about your relatives. Based on the descriptions of the data we've seen, this sounds like the data the bad actors collected.
    4. You can also download the "Raw data," which is the uninterpreted version of your DNA. 

There are other types of data you can download on this page, though much of it will not be of use to you without special software. But there's no harm in downloading everything.

Once you have that data downloaded, follow the company's guide for deleting your account. The button to start the process is located on the bottom of the same account page where you downloaded data.

Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get genetically determined diseases. This incident is an example of why this matters, and how certain features that may seem useful in the moment can be weaponized in novel ways. For more information about genetic privacy, see our Genetic Information Privacy legal overview, and other Health Privacy-related topics on our blog.

How To Turn Off Google’s “Privacy Sandbox” Ad Tracking—and Why You Should

September 28, 2023 at 13:42

Google has rolled out "Privacy Sandbox," a Chrome feature first announced back in 2019 that, among other things, exchanges third-party cookies—the most common form of tracking technology—for what the company is now calling "Topics." Topics is a response to pushback against Google’s proposed Federated Learning of Cohorts (FLoC), which we called "a terrible idea" because it gave Google even more control over advertising in its browser while not truly protecting user privacy. While there have been some changes to how this works since 2019, Topics is still tracking your internet use for Google’s behavioral advertising.

If you use Chrome, you can disable this feature through a series of three confusing settings.

With the version of the Chrome browser released in September 2023, Google tracks your web browsing history and generates a list of advertising "topics" based on the websites you visit. This works as you might expect. At launch there are almost 500 advertising categories—like "Student Loans & College Financing," "Parenting," or "Undergarments"—that you get dumped into based on whatever you're reading about online. A site that supports Privacy Sandbox will ask Chrome what sorts of things you're supposedly into, and then display an ad accordingly.

The idea is that instead of the dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google will track your interests in the browser itself, controlling even more of the advertising ecosystem than it already does. Google calls this “enhanced ad privacy,” perhaps leaning into the idea that starting in 2024 it plans to “phase out” the third-party cookies that many advertisers currently use to track people. But the company will still gobble up your browsing habits to serve you ads, preserving its bottom line in a world where competition on privacy is pushing it to phase out third-party cookies.

Google plans to test Privacy Sandbox throughout 2024, which means that for the next year or so, third-party cookies will continue to collect and share your data in Chrome.

The new Topics improves somewhat over the 2019 FLoC. It does not use the FLoC ID, a number that many worried would be used to fingerprint you. The ad-targeting topics are all public on GitHub, hopefully avoiding any clearly sensitive categories such as race, religion, or sexual orientation. Chrome's ad privacy controls, which we detail below, allow you to see what sorts of interest categories Chrome puts you in, and remove any topics you don't want to see ads for. There's also a simple means to opt out, which FLoC never really had during testing.

Other browsers, like Firefox and Safari, baked in privacy protections from third-party cookies in 2019 and 2020, respectively. Neither of those browsers has anything like Privacy Sandbox, which makes them better options if you'd prefer more privacy. 

Google referring to any of this as “privacy” is deceptive. Even if it's better than third-party cookies, the Privacy Sandbox is still tracking; it's just done by one company instead of dozens. Instead of waffling between different tracking methods, even with mild improvements, we should work toward a world without behavioral ads.

But if you're sticking to Chrome, you can at least turn these features off.

How to Disable Privacy Sandbox

Depending on when you last updated Chrome, you may have already received a pop-up asking you to agree to “Enhanced ad privacy in Chrome.” If you just clicked the big blue button that said “Got it” to make the pop-up go away, you opted yourself in. But you can still get back to the opt-out page easily enough: click the three-dot icon (⋮) > Settings > Privacy & Security > Ad Privacy. There you'll find three different settings:

  • Ad topics: This is the fundamental component of Privacy Sandbox that generates a list of your interests based on the websites you visit. If you leave this enabled, you'll eventually get a list of all your interests, which are used for ads, as well as the ability to block individual topics. The topics roll over every four weeks (up from weekly in the FLoC proposal) and random ones will be thrown in for good measure. You can disable this entirely by setting the toggle to "Off."
  • Site-suggested ads: This confusingly named toggle is what allows advertisers to do what’s called "remarketing" or "retargeting," also known as “after I buy a sofa, every website on the internet advertises that same sofa to me.” With this feature, site one gives information to your Chrome instance (like “this person loves sofas”) and site two, which runs ads, can interact with Chrome such that a sofa ad will be shown, even without site two learning that you love sofas. Disable this by setting the toggle to "Off."
  • Ad measurement: This allows advertisers to track ad performance by storing data in your browser that's then shared with other sites. For example, if you see an ad for a pair of shoes, the site would get information about the time of day, whether the ad was clicked, and where it was displayed. Disable this by setting the toggle to "Off."

If you're on Chrome, Firefox, Edge, or Opera, you should also take your privacy protections a step further with our own Privacy Badger, a browser extension that blocks third-party trackers that use cookies, fingerprinting, and other sneaky methods. On Chrome, Privacy Badger also disables the Topics API by default.
