
EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements

By: Hannah Zhao
January 24, 2025 at 19:03

Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant for Mr. Carson's smartphone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief.

In our brief, we argued that the warrant was constitutionally deficient and overbroad: there was no probable cause for searching the cell phone, and the warrant was insufficiently particular because it failed to limit the search to a specific time frame or to certain categories of information.

As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health. As the refrain goes, whatever the need, there's an app for that. This special nature of digital devices requires courts to review warrants to search digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.

In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.

Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. These "all content" warrants are the exact type of general warrants against which the Fourth Amendment and its state corollaries were meant to protect. Cell phone search warrants that have been upheld have contained temporal constraints and limits on the categories of data to be searched. Neither limitation, nor any other, appeared in the issued search warrant. The police should have used date limitations in applying for the search warrant, as they did in their warrant applications for other searches in the same investigation. Additionally, the warrant allowed the search of all the information on the phone, the vast majority of which did not—and could not—contain evidence related to the investigation.

As smart phones become more capacious and entail more functions, it is imperative that courts adhere to the narrow construction of warrants for the search of electronic devices to support the basic purpose of the Fourth Amendment to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.

Face Scans to Estimate Our Age: Harmful and Creepy AF

January 23, 2025 at 18:56

Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.

Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors recurringly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
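The "delete everything immediately" design mentioned above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration — the function names, the stub estimator, and the age threshold are all assumptions, not any vendor's actual API:

```python
# Hypothetical sketch of a minimal-retention, on-device age gate:
# the system returns only a yes/no decision and keeps nothing else.

def estimate_age(face_scan: bytes) -> float:
    # Stand-in for an on-device estimation model. A real model would
    # run inference on the scan; here we return a fixed guess so the
    # sketch stays self-contained.
    return 25.0

def passes_age_gate(face_scan: bytes, minimum_age: int = 18) -> bool:
    """Return only the pass/fail decision; the scan is never stored,
    logged, or transmitted."""
    decision = estimate_age(face_scan) >= minimum_age
    # Drop the local reference to the raw scan immediately.
    del face_scan
    return decision
```

Even this design only narrows the window of exposure — the scan still exists in memory while the decision is made, which is why lower-risk is not zero-risk.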

Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods—whether from users who gave murky consent at best, or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significant here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forego the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

The Impact of Age Verification Measures Goes Beyond Porn Sites

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court–Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unsure of, and unable to precisely exclude, the minimum amount of content that fits the bill's definition, leading them to over-censor content that may well include this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education, and LGBTQ+ identity and health, being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA. 

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from "sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users that turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default. 

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those who are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This conglomeration of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression. 

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Texas Is Enforcing Its State Data Privacy Law. So Should Other States.

January 22, 2025 at 17:31

States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA) to take the Allstate Corporation to task for sharing driver location and other driving data without telling customers.

In its complaint, the attorney general’s office alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting unfair and deceptive practices.)

On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate certain functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.
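The pattern alleged here — a host app embedding a vendor's SDK, which then forwards location fixes the app was already collecting — can be sketched as follows. This is a hypothetical illustration of the general SDK pattern, with invented class and method names; it is not Arity's actual code:

```python
# Hypothetical sketch: an embedded analytics SDK that quietly queues,
# for upload to its own vendor, location fixes the host app gathered
# for its own features.

class LocationSDK:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.pending: list[dict] = []  # fixes queued for the vendor

    def record_location(self, lat: float, lon: float, timestamp: int) -> None:
        # The host app calls this with location fixes it already has;
        # the user sees no new permission prompt for this extra copy.
        self.pending.append({"lat": lat, "lon": lon, "ts": timestamp})

    def flush(self) -> list[dict]:
        # In a real SDK this would POST the batch to the vendor's servers;
        # here it just returns and clears the queue.
        batch, self.pending = self.pending, []
        return batch

# A location-based host app (e.g. a family-tracking app) integrates it:
sdk = LocationSDK(api_key="demo")
sdk.record_location(30.2672, -97.7431, 1718000000)
uploaded = sdk.flush()
```

Because the host app already asked for location permission for its own features, the extra copy of the data leaves the device without any visible change for the user — which is precisely why the complaint calls the practice secret.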

That’s a dirty trick. Data that you can pull from cars is often highly sensitive, as we have raised repeatedly. Everyone should know when that information's being collected and where it's going.

More state regulators should follow suit and use the privacy laws on their books.

The Texas Attorney General’s office estimates that 45 million Americans, including those in Texas, unwittingly downloaded this software that collected their information, including location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, give conspicuous notice that they’re selling or processing sensitive data for targeted advertising, and obtain consumer consent to process sensitive data.

This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do, for example, to fulfill one of the notice obligations under the TDPSA was to put up a line on their website saying, “NOTICE: We may sell your sensitive personal data.”

In fact, Texas’s privacy law does not meet the minimum of what we’d consider a strong privacy law. For example, the Texas Attorney General is the only one who can file a lawsuit under the state’s privacy law. But we advocate for provisions that make sure that everyone, not only state attorneys general, can file suits to make sure that all companies respect our privacy.

Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to cure, because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured the violation. It just didn’t.

According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.

Nineteen states now have some version of a data privacy law. Enforcement has been a bit slower. California has brought a few enforcement actions since its privacy law went into effect in 2020; Texas and New Hampshire are two states that have created dedicated data privacy units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.

The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step

January 22, 2025 at 16:30

The Federal Trade Commission announced a proposed settlement under which General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites.

The FTC complaint zeroed in on how G.M. enrolled people in its OnStar connected vehicle service with a misleading process. OnStar was initially designed to help drivers in an emergency, but over time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that deserves to be stopped.

For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, making it so they cannot share driver data or geolocation to consumer reporting agencies, which gather and sell consumers’ credit and other information. They must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to disable the collection of location data and driving information.

These are all important, solid steps, and these sorts of rules should apply to all carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects, and it is not always possible to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.

All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all carmakers—not just G.M.—would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for.

Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data

By: Lena Cohen
January 17, 2025 at 10:59

If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced its latest policy changes. These changes, seemingly designed to appease the incoming Trump administration, included loosening Meta’s hate speech policy to allow for the targeting of LGBTQ+ people and immigrants. 

If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.

Meta’s Business Model Relies on Your Personal Data

You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue, demonstrating just how valuable your personal data is to its business. 

How Meta Harvests Your Personal Data

Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data. A 2022 investigation by The Markup found that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel. 
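Mechanically, a tracking pixel is just a tiny image the page loads from the tracker's servers — and the image request itself carries the identifying information. The sketch below shows the kind of URL such a request can encode; the endpoint and parameter names are invented for illustration and are not Meta's actual pixel API:

```python
# Simplified sketch of what a tracking pixel transmits. When a page
# embeds a 1x1 image from a tracker, the browser's request for that
# image carries the visited page's URL (and, via headers, any tracker
# cookie), so the tracker learns what you viewed.
from urllib.parse import urlencode

def pixel_request_url(page_url: str, event: str) -> str:
    # Encode the visited page and event type as query parameters,
    # the way a pixel's image URL typically does.
    params = urlencode({"dl": page_url, "ev": event})
    return f"https://tracker.example/tr?{params}"

url = pixel_request_url(
    "https://hospital.example/appointments/oncology", "PageView"
)
# The tracker's server logs now record that this browser visited an
# oncology appointment page — sensitive data leaked by a mere page view.
```

This is why embedding a pixel on a hospital or bank page can expose sensitive information: the page URL alone often reveals what the visit was about.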

Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send it data about your offline purchases and interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data. In 2018, Meta admitted to collecting information about non-users, including their contact details and browsing history.

Take These Steps to Limit How Meta Profits From Your Personal Data

Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data. 

Update Your Meta Account Settings

Open your Instagram or Facebook app and navigate to the Accounts Center page. 

A screenshot of the Meta Accounts Center page.

If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for both platforms.

Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes its settings and adds extra steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.

Once you’re on the “Accounts Center” page, make the following changes:

1) Stop Meta from targeting ads based on data it collects about you on other apps and websites: 

Click the Ad preferences option under Accounts Center, then select the Manage Info tab (this tab may be called Ad settings depending on your location). Click the Activity information from ad partners option, then Review Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when prompted.

A screenshot of the "Activity information from ad partners" setting with the "No" option selected

2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the process.

Back on the Ad preferences page, click the Manage info tab again (called Ad settings depending on your location), then select the Ads shown outside of Meta setting, select Not allowed and then click the “X” button to close the pop-up.

Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.

A screenshot of the "Ads outside Meta" setting with the "Not allowed" option selected

3) Disconnect the data that other companies share with Meta about you from your account:

From the Accounts Center screen, click the Your information and permissions option, followed by Your activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity button. Note: This may take up to 48 hours to take effect.

Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.

A screenshot of the "Manage future activity" setting with the "Disconnect future activity" option selected

While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for other purposes. 

Install Privacy Badger to Block Meta’s Trackers

Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.
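The learning heuristic EFF documents for Privacy Badger—block a third-party domain once it has been observed tracking you on three or more distinct sites—can be sketched roughly like this. This is a simplification: the real extension also inspects cookies and fingerprinting behavior and maintains an allowlist.

```python
from collections import defaultdict

BLOCK_THRESHOLD = 3  # block a tracker seen on this many distinct sites

class TrackerHeuristic:
    """Simplified sketch of Privacy Badger's learning heuristic:
    a third-party domain observed tracking you on enough distinct
    first-party sites gets blocked everywhere."""

    def __init__(self):
        # tracker domain -> set of first-party sites it was seen on
        self.seen_on = defaultdict(set)

    def observe(self, first_party: str, third_party: str) -> None:
        """Record that a third-party request loaded on a first-party site."""
        if third_party != first_party:
            self.seen_on[third_party].add(first_party)

    def should_block(self, third_party: str) -> bool:
        return len(self.seen_on[third_party]) >= BLOCK_THRESHOLD

h = TrackerHeuristic()
for site in ["news.example", "shop.example", "blog.example"]:
    h.observe(site, "pixel.tracker.example")
print(h.should_block("pixel.tracker.example"))  # True
```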

Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the only mobile browser that supports Privacy Badger. 

Limit Meta’s Tracking on Your Phone

Take these additional steps on your mobile device:

  • Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
  • Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.

The Real Solution: Strong Privacy Legislation

Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information. 

Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit.

Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton

By: Jason Kelley
January 13, 2025, 4:02 PM

The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.  

The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.  

The plaintiff in this case is the Free Speech Coalition, the nonprofit non-partisan trade association for the adult industry, and the Defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.

1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.  

Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why: 

While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof-of-age in physical spaces, there are practical differences that make those disclosures less burdensome or even nonexistent compared to online prohibitions. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might perhaps be seventeen or under.  

First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.  

Second, while there are a variety of methods for verifying age online, HB 1181 doesn’t set out a specific method for websites to use. The most common method today forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. But fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.  


Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of a person’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway. 

Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.  

Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.  

2. HB1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare. 

Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, both of which could be fly-by-night companies with no published privacy standards, are following these rules. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.  

There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX encountered a breach, and there’s no reason to suspect this issue won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB1181 creates a perfect storm for data privacy. 

3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.  

More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography. 

It’s also not just adult content that’s at risk. A ruling from the Court on HB1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws, like the Kids Online Safety Act, which would force users to verify their ages before accessing social media. 

4. The Supreme Court Has Rightly Struck Down Similar Laws Before.  

In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  


The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web. 

Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

5. There is No Safe, Privacy Protecting Age-Verification Technology. 

The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act. 

* * *

 

The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law 

Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into virtually every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights. 

For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review. 

 

EFF Goes to Court to Uncover Police Surveillance Tech in California

Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.

EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming the information is a trade secret. We are going to court to make sure the public gets access to these records.

The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.

How did this case start?

As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.

The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.

Pen-Link then filed a lawsuit to permanently block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement customers to sign non-disclosure agreements promising not to reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.

“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.

Unfortunately, these kinds of terms are not new.

EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool traditionally only used to protect a person’s constitutional right to privacy—not a business’ purported trade secrets. In addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the un-redacted records.

Who is Pen-Link and Cobwebs Technologies?

Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired the company Cobwebs Technologies.

The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount of location data sourced from large data sets. Tangles uses AI to glean intelligence from the “open, deep and dark web.” Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS) and state entities, like the Texas Department of Public Safety and the West Virginia Fusion Center. EFF classifies this type of product as a “Third Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.

What’s next?

Before EFF officially joins the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.

For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.

Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass surveillance to local police.

The case number is STK-CV-UWM-0016425. Read more here: 

EFF's Motion to Intervene
EFF's Points and Authorities
Trujillo Declaration & EFF's Cross-Petition
Pen-Link's Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office

Online Behavioral Ads Fuel the Surveillance Industry—Here’s How

By: Lena Cohen
January 6, 2025, 11:41 AM

A global spy tool exposed the locations of billions of people to anyone willing to pay. A Catholic group bought location data about gay dating app users in an effort to out gay priests. A location data broker sold lists of people who attended political protests. 

What do these privacy violations have in common? They share a source of data that’s shockingly pervasive and unregulated: the technology powering nearly every ad you see online. 

Each time you see a targeted ad, your personal information is exposed to thousands of advertisers and data brokers through a process called “real-time bidding” (RTB). This process does more than deliver ads—it fuels government surveillance, poses national security risks, and gives data brokers easy access to your online activity. RTB might be the most privacy-invasive surveillance system that you’ve never heard of.

What is Real-Time Bidding?

RTB is the process used to select the targeted ads shown to you on nearly every website and app you visit. The ads you see are the winners of milliseconds-long auctions that expose your personal information to thousands of companies a day. Here’s how it works:

  1. The moment you visit a website or app with ad space, it asks a company that runs ad auctions to determine which ads it will display for you. This involves sending information about you and the content you’re viewing to the ad auction company.
  2. The ad auction company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers. 
  3. The bid request may contain personal information like your unique advertising ID, location, IP address, device details, interests, and demographic information. The information in bid requests is called “bidstream data” and can easily be linked to real people. 
  4. Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on ad space. 
  5. Advertisers, and their ad buying platforms, can store the personal data in the bid request regardless of whether or not they bid on ad space. 
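The steps above can be sketched as a toy auction. The field names loosely echo OpenRTB conventions, and the bidders are invented for illustration; the point is that every participant receives the bid request, whether or not it bids.

```python
# Illustrative sketch of a real-time bidding auction. Field names loosely
# follow OpenRTB conventions; this is not a real exchange implementation.
bid_request = {
    "id": "auction-123",
    "device": {"ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",
               "ip": "203.0.113.7", "os": "Android"},
    "geo": {"lat": 37.77, "lon": -122.41},
    "user": {"interests": ["fitness", "loans"]},
    "site": {"page": "https://news.example.com/article"},
}

def run_auction(request, bidders):
    received = []  # every bidder sees the full request...
    bids = {}
    for name, strategy in bidders.items():
        received.append(name)
        price = strategy(request)  # ...whether or not it chooses to bid
        if price is not None:
            bids[name] = price
    winner = max(bids, key=bids.get) if bids else None
    return winner, received

bidders = {
    "shoe_brand": lambda req: 0.75,
    "loan_company": lambda req: 1.40 if "loans" in req["user"]["interests"] else None,
    "data_broker": lambda req: None,  # never bids, but still receives the data
}
winner, saw_data = run_auction(bid_request, bidders)
print(winner, saw_data)
```

Only one bidder wins the impression, but all three walked away with the device ID, location, and interests in the request—including the broker that never intended to buy an ad.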

A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive the data. Indeed, anyone posing as an ad buyer can access a stream of sensitive data about the billions of individuals using websites or apps with targeted ads. That’s a big way that RTB puts personal data into the hands of data brokers, who sell it to basically anyone willing to pay. Although some ad auction companies have policies against selling bidstream data, the practice remains widespread. 

RTB doesn’t just allow companies to harvest your data—it also incentivizes it. Bid requests containing more personal data attract higher bids, so websites and apps are financially motivated to harvest as much of your data as possible. RTB further incentivizes data brokers to track your online activity because advertisers purchase data from data brokers to inform their bidding decisions.

Data brokers don’t need any direct relationship with the apps and websites they’re collecting bidstream data from. While some data collection methods require web or app developers to install code from a data broker, RTB is facilitated by ad companies that are already plugged into most websites and apps. This allows data brokers to collect data at a staggering scale. Hundreds of billions of RTB bid requests are broadcast every day. For each of those bids, thousands of real or fake ad buying platforms may receive data. As a result, entire businesses have emerged to harvest and sell data from online advertising auctions.

First FTC Action Against Abuse of Real-Time Bidding Data

A recent enforcement action by the Federal Trade Commission (FTC) shows that the dangers of RTB are not hypothetical—data brokers actively rely on RTB to collect and sell sensitive information. The FTC found that data broker Mobilewalla was collecting personal data—including precise location information—from RTB auctions without placing ads. 

Mobilewalla collected data on over a billion people, with an estimated 60% sourced directly from RTB auctions. The company then sold this data for a range of invasive purposes, including tracking union organizers, tracking people at Black Lives Matter protests, and compiling home addresses of healthcare employees for recruitment by competing employers. It also categorized people into custom groups for advertisers, such as “pregnant women,” “Hispanic churchgoers,” and “members of the LGBTQ+ community.”

The FTC concluded that Mobilewalla's practice of collecting personal data from RTB auctions where they didn’t place ads violated the FTC Act’s prohibition of unfair conduct. The FTC’s proposed settlement order bans Mobilewalla from collecting consumer data from RTB auctions for any purposes other than participating in those auctions. This action marks the first time the FTC has targeted the abuse of bidstream data. While we celebrate this significant milestone, the dangers of RTB go far beyond one data broker. 

Real-Time Bidding Enables Mass Surveillance 

RTB is regularly exploited for government surveillance. As early as 2017, researchers demonstrated that $1,000 worth of ad targeting data could be used to track an individual’s locations and glean sensitive information like their religion and sexual orientation. Since then, data brokers have been caught selling bidstream data to government intelligence agencies. For example, the data broker Near Intelligence collected data about more than a billion devices from RTB auctions and sold it to the U.S. Defense Department. Mobilewalla sold bidstream data to another data broker, Gravy Analytics, whose subsidiary, Venntel, likewise has sold location data to the FBI, ICE, CBP, and other government agencies. 

In addition to buying raw bidstream data, governments buy surveillance tools that rely on the same advertising auctions. The surveillance company Rayzone posed as an advertiser to acquire bidstream data, which it repurposed into tracking tools sold to governments around the world. Rayzone’s tools could identify phones that had been in specific locations and link them to people's names, addresses, and browsing histories. Patternz, another surveillance tool built on bidstream data, was advertised to security agencies worldwide as a way to track people's locations. The CEO of Patternz highlighted the connection between surveillance and advertising technology when he suggested his company could track people through “virtually any app that has ads.”

Beyond the privacy harms from RTB-fueled government surveillance, RTB also creates national security risks. Researchers have warned that RTB could allow foreign states and non-state actors to obtain compromising personal data about American defense personnel and political leaders. In fact, Google’s ad auctions sent sensitive data to a Russian ad company for months after it was sanctioned by the U.S. Treasury. 

The privacy and security dangers of RTB are inherent to its design, and not just a matter of misuse by individual data brokers. The process broadcasts torrents of our personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. This indiscriminate sharing of location data and other personal information is dangerous, regardless of whether the recipients are advertisers or surveillance companies in disguise. Sharing sensitive data with advertisers enables exploitative advertising, such as predatory loan companies targeting people in financial distress. RTB is a surveillance system at its core, presenting corporations and governments with limitless opportunities to use our data against us.

How You Can Protect Yourself

Privacy-invasive ad auctions occur on nearly every website and app, but there are steps you can take to protect yourself:

  • For apps: Follow EFF’s instructions to disable your mobile advertising ID and audit app permissions. These steps will reduce the personal data available to the RTB process and make it harder for data brokers to create detailed profiles about you.
  • For websites: Install Privacy Badger, a free browser extension built by EFF to block online trackers. Privacy Badger automatically blocks tracking-enabled advertisements, preventing the RTB process from beginning.

These measures will help protect your privacy, but advertisers are constantly finding new ways to collect and exploit your data. This is just one more reason why individuals shouldn’t bear the sole responsibility of defending their data every time they use the internet.

The Real Solution: Ban Online Behavioral Advertising

The best way to prevent online ads from fueling surveillance is to ban online behavioral advertising. This would end the practice of targeting ads based on your online activity, removing the primary incentive for companies to track and share your personal data. It would also prevent your personal data from being broadcast to data brokers through RTB auctions. Ads could still be targeted contextually—based on the content of the page you’re currently viewing—without collecting or exposing sensitive information about you. This shift would not only protect individual privacy but also reduce the power of the surveillance industry. Seeing an ad shouldn’t mean surrendering your data to thousands of companies you’ve never heard of. It’s time to end online behavioral advertising and the mass surveillance it enables.
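The difference is easy to see in code. A contextual ad picker needs only the page’s text—no identifier, location, or browsing history ever enters the process. This is an illustrative sketch with invented ad inventory:

```python
# Contextual targeting sketch: the ad is chosen from the page's content
# alone. No user profile, device ID, or bidstream data is involved.
AD_INVENTORY = {
    "running shoes": ["marathon", "training", "5k"],
    "cookware": ["recipe", "kitchen", "baking"],
}

def contextual_ad(page_text: str):
    """Return the ad whose keywords best match the page, or None."""
    words = set(page_text.lower().split())
    best, best_hits = None, 0
    for ad, keywords in AD_INVENTORY.items():
        hits = len(words & set(keywords))
        if hits > best_hits:
            best, best_hits = ad, hits
    return best

print(contextual_ad("A beginner training plan for your first marathon"))
```

The function’s only input is the page text itself—which is precisely why contextual targeting can work without the surveillance infrastructure that behavioral advertising requires.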

State Legislatures Are The Frontline for Tech Policy: 2024 in Review

State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, state lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.

This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.

Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year’s session.

EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.


States Continue to Experiment

Several states also introduced bills this year that raise similar issues as the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead introduces considerable censorship and privacy concerns.

For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that it was unclear what counted as “sexually explicit content” under its definition. This vagueness created barriers that could keep youth—particularly LGBTQ+ youth—from accessing legitimate content online.

We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers in more than a dozen states filed bills with this requirement. As we said in comments to the New York Attorney General’s office on the state’s recently passed “SAFE for Kids Act,” none of the verification methods the state was considering are both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law a federal judge put its enforcement on pause (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to focus on the harms that deepfakes pose without endangering speech rights.

On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont advanced bills that significantly improve on state privacy laws we’ve seen before. The Maryland Online Data Privacy Act (MODPA), authored by State Senator Dawn Gile and Delegate Sara Love (now State Senator Sara Love), contains strong data minimization requirements. Vermont’s privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.

Looking Ahead to 2025

2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms. 

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Cars (and Drivers): 2024 in Review

28 December 2024 at 15:18

If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras, GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.

Cars Sure Are Sharing a Lot of Information

While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was sharing information about drivers’ habits with insurance companies without consent.

It turned out a number of other car companies were doing the same, using deceptive design so people didn’t always realize they were opting into the program. We walked through how to see for yourself what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to research, let alone opt out of, data sharing.

Which is why we were happy to see Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.

Advocating for Better Bills to Protect Abuse Survivors

The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.

This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.

But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate access immediately, and only require some follow-up documentation up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later" framework. This approach is helpful for survivors in one scenario—a survivor who has no documentation of their abuse, and who needs to get away immediately in a car owned by their abuser. Unfortunately, this approach also opens up many new avenues of stalking, harassment, and abuse for survivors. These bills ended up being combined into S.B. 1394, which retained some provisions we remain concerned about.

It’s Not Just the Car Itself

Because of everything else that comes with car ownership, a car is just one piece of the mobile privacy puzzle.

This year we fought against A.B. 3138 in California, which proposed adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important issue that we’ll fight for.

We wrote about a bulletin released by the U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of this vulnerability is alarming: EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates.

Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs—which range from equity to privacy problems—and put together an FAQ to help you decide if you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both due to the unanswered questions about their privacy and security, and their potential use for government-mandated age verification on the internet.

The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and increasingly necessary, for cars, too.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Global Age Verification Measures: 2024 in Review

27 December 2024 at 13:29

EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.

Kids Experiencing Harm is Not Just an Online Phenomenon

In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face over $30 million in fines. This is similar to last year’s ban on social media access for children under 15 without parental consent in France, and Norway also pledged to follow a similar ban.

No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and much of the evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and from a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as online.

The internet is a valuable resource for both young people and adults who rely on the internet to find community and themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave and institute age verification tools to ensure that it happens. 

Limiting Access for Kids Limits Access for Everyone 

Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities and the groups that serve them the most. History shows that over-censorship is inevitable.

This year, Canada also introduced an age verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. This was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online won’t help women or young people. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This creates a legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who merely transmit it, knowingly or not.

Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance 

Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways, or even shared with unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

Online age verification is not like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed. Without requiring all parties who may have access to the data to delete that data, such as third-party intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or processing sensitive documents like drivers’ licenses. 

Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website. 

These proposals are coming to the U.S. as well. We analyzed various age verification methods in comments to the New York Attorney General. None of them are both accurate and privacy-protective. 

The Scramble to Find an Effective Age Verification Method Shows There Isn't One

The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up with criteria for effective age verification. In parallel, the Commission has asked for proposals for a 'mini EU ID wallet' to implement device-level age verification ahead of the expected rollout of digital identities across the EU in 2026. At the same time, smaller social media companies and dating platforms have for years been arguing that age verification should take place at the device or app-store level, and will likely support the Commission's plans. As we move into 2025, EFF will continue to follow these developments as it becomes clearer whether the Commission expects porn platforms to adopt age verification to comply with their risk mitigation obligations under the DSA.

Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these shortcomings, and to explore less invasive approaches to protecting all people from online harms.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Growing Intersection of Reproductive Rights and Digital Rights: 2024 in Review

By Daly Barnett
27 December 2024 at 13:24

Dear reader of our blog, surely by now you know the format: as we approach the end of the year, we look back on our work, count our wins, learn from our misses, and lay the groundwork for a better future. It's been an intense year in the fight for reproductive rights and its intersections with digital civil liberties. Going after cops illegally sharing location data, fighting the data broker industry, and building coalitions with the broader movement for reproductive justice—we've stayed busy.

The Fight Against Warrantless Access to Real-Time Location Tracking

The location data market is an unregulated nightmare industry that poses an existential threat to everyone's privacy, but especially those embroiled in the fight for reproductive rights. In a recent blog post, we wrote about the particular dangers posed by LocateX, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphone devices. Cops shouldn't be able to buy their way around having to get a warrant for real-time location tracking of anyone they please, regardless of the context. In regressive states that ban abortion, however, the problems with LocateX illustrate just how severe the issue can be for such a large population of people.

Building Coalition Within Digital Civil Liberties and Reproductive Justice

Part of our work in this movement is recognizing our lane: providing digital security tips, promoting the rights to privacy and free expression, and making connections with community leaders to support and elevate their work. This year we hosted a livestream panel featuring various next-level thinkers and reproductive justice movement leaders. Make sure to watch it if you missed it! Recognizing and highlighting our shared struggles, interests, and avenues for liberation is exactly how movements are fought for and won.

The Struggle to Stop Cops from Illegally Sharing ALPR data

It's been a multi-year battle to stop law enforcement agencies from illegally sharing out-of-state ALPR (automatic license plate reader) data. Thankfully, this year we were able to celebrate a win: a grand jury in Sacramento made the motion to investigate two police agencies that have been illegally sharing this type of data. We're glad to declare victory, but those two agencies are far from the only problem. We hope this sets a precedent that cops aren't above the law, and we will continue to fight for just that. This win will help us build momentum to continue this fight into the coming year.

Sharing What We Know About Digital Surveillance Risks

We'll be the first to tell you that expertise in digital surveillance threats always begins with listening. We've learned a lot in the few years we've been researching the privacy and security risks facing this issue space, much of it gathered from conversations and trainings with on-the-ground movement workers. We gathered what we've learned from that work and distilled it into an accessible format for anyone who needs it. Behind the scenes, this research continues to inform the hands-on digital security trainings we provide to activists and movement workers.

As we proceed into an uncertain future where abortion access will continue to be a difficult struggle, we'll continue to do what we do best: standing vigilant for peoples' right to privacy, fighting bad Internet laws, protecting free speech online, and building coalition with others. Thank you for your support.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF in the Press: 2024 in Review

By Josh Richman
23 December 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding and explores the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s Netchoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities. 

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year

Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy-first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data.
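
To make that point concrete, here is a minimal, hypothetical sketch of data minimization in practice. All field names and values are invented for illustration: the idea is simply that a service keeps only what it needs to do its job and drops everything else before storing a record.

```python
# Hypothetical illustration of data minimization: store only the
# fields actually needed to fulfill an order, and drop the rest.
REQUIRED_FIELDS = {"email", "order_id"}

def minimize(record):
    """Return a copy of the record containing only the required fields."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {
    "email": "customer@example.com",
    "order_id": "A-1001",
    "ssn": "000-00-0000",        # never needed to ship an order
    "birth_date": "1985-06-01",  # also unnecessary to retain
}

stored = minimize(raw)
# A breach of `stored` exposes far less than a breach of `raw` would.
```

Data that is never collected, or is promptly deleted, cannot be stolen—which is why minimization shrinks the blast radius of every breach described below.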

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.
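
To illustrate the mechanism (this is not Kaiser's actual code), here is a hedged sketch of the kind of request a third-party tracking pixel can fire when a page loads. The tracker domain, parameter names, and values are all hypothetical:

```python
from urllib.parse import urlencode

def tracking_pixel_url(user_id, page_url, search_term):
    """Build the URL a hypothetical third-party tracker might request
    on page load, leaking browsing context to the tracking company."""
    params = {
        "uid": user_id,      # persistent identifier, e.g. a cookie value
        "page": page_url,    # reveals which article the visitor is reading
        "q": search_term,    # reveals what they searched for on the site
    }
    return "https://tracker.example/collect?" + urlencode(params)

url = tracking_pixel_url(
    "abc123",
    "https://healthsite.example/encyclopedia/depression",
    "depression symptoms",
)
# Everything encoded in `url` becomes visible to the tracker on every page view.
```

Even without a name attached, a stable identifier combined with page URLs and search terms is often enough to infer sensitive medical details over time.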

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.

Head back to the table of contents.

The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using info stealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner); though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 

Head back to the table of contents.

The Only Stalkers Allowed Award: mSpy

mSpy, a commercially available mobile stalkerware app owned by the Ukraine-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.

Head back to the table of contents.

The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the stolen data being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach they are doing some basic things like resetting user passwords and strengthening their security infrastructure.

Head back to the table of contents.

The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 

Head back to the table of contents.

The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) in which 576,000 accounts were compromised through a “credential stuffing attack.” This is a common, relatively easy sort of automated attack where thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were in the Comcast data breach of 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.
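As an illustration of what a password manager’s generator does under the hood, here is a minimal sketch using Python’s standard `secrets` module (the site names are hypothetical, and real password managers also handle storage and syncing):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation,
    drawing from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site: a breach at one service leaks a string
# that is useless for credential stuffing anywhere else.
site_passwords = {site: generate_password() for site in ("roku", "comcast", "email")}
```

Because every account gets its own random string, the leaked credentials from one breach give an attacker nothing to “stuff” into other services.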

Head back to the table of contents.

The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.

Head back to the table of contents.

The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.
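To see why exposing that secret makes 2FA useless, here is a minimal sketch of the standard TOTP algorithm (RFC 6238) that most authenticator apps implement; anyone who holds the shared secret can compute exactly the codes the account owner sees. This is a generic illustration of the standard, not Spoutible’s actual code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the 30-second time counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

The secret is the only input an attacker doesn’t already have; the rest is just the current time. Leak the secret, and the “second factor” collapses into no factor at all.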

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!

EFF thanks guest author Troy Hunt for this contribution to the Breachies.

Head back to the table of contents.

The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.
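The gap between row counts and head counts is easy to demonstrate. A toy sketch with entirely fabricated records (none of this resembles the actual NPD data):

```python
# Hypothetical leaked dataset: each row is a record, not a person.
rows = [
    {"name": "Ada Smith", "ssn": "000-00-0001"},
    {"name": "Ada Smith", "ssn": "000-00-0001"},  # exact duplicate row
    {"name": "A. Smith",  "ssn": "000-00-0001"},  # same person, different spelling
    {"name": "Bob Jones", "ssn": "000-00-0002"},
]

# Deduplicate by a stable identifier to estimate distinct individuals.
unique_people = {r["ssn"] for r in rows}
print(len(rows), "rows, but only", len(unique_people), "individuals")
```

Four rows, two people: scale that confusion up and “2.9 billion rows” can make headlines as “2.9 billion people.”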

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.

Head back to the table of contents.

The Biggest Health Breach We’ve Ever Seen Award: Change Healthcare

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.

Head back to the table of contents.

The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a Senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 

Head back to the table of contents.

The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records for individuals using these companies. A combination of infostealer malware infections on non-Snowflake machines and weak security protecting the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April through July of this year, Snowflake was not requiring two-factor authentication, an account security measure that could have protected against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to its website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, it stores and indexes troves of customer data for companies to analyze. And the larger the store of data, the bigger the target it presents to malicious actors seeking leverage to extort those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, it includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place, or shortening the period for which it is retained. Otherwise it just sits there waiting for the next breach.
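In practice, data minimization can start with something as simple as an enforced retention window. A hypothetical sketch (the 90-day window and field names are illustrative, not anything Snowflake or its customers actually use):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical retention policy

def purge_expired(records, now=None):
    """Keep only records newer than the retention window.
    Data deleted on schedule cannot be stolen in the next breach."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["created_at"] <= RETENTION]
```

The point is not the three lines of code but the policy they encode: records that no longer serve a business purpose are liabilities, and routinely deleting them shrinks the blast radius of any future compromise.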

Head back to the table of contents.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. If you have kids, you can freeze their credit too. That might sound absurd, since they can’t even open bank accounts, but a child’s unmonitored credit file is exactly the kind of target identity thieves look for.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 

Head back to the table of contents.

(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Australia Banning Kids from Social Media Does More Harm Than Good

December 18, 2024 at 12:42

Age verification systems are surveillance systems that threaten everyone’s privacy and anonymity. But Australia’s government recently decided to ignore these dangers, passing a vague, sweeping piece of age verification legislation after giving only a day for comments. The Online Safety Amendment (Social Media Minimum Age) Act 2024, which bans children under the age of 16 from using social media, will force platforms to take undefined “reasonable steps” to verify users’ ages and prevent young people from using them, or face over $30 million in fines. 

The country’s Prime Minister, Anthony Albanese, claims that the legislation is needed to protect young people in the country from the supposed harmful effects of social media, despite no study showing such an impact. This legislation will be a net loss for both young people and adults who rely on the internet to find community and themselves.

The law does not specify which social media platforms will be banned. Instead, this decision is left to Australia’s communications minister who will work alongside the country’s internet regulator, the eSafety Commissioner, to enforce the rules. This gives government officials dangerous power to target services they do not like, all at a cost to both minor and adult internet users.

The legislation also does not specify what type of age verification technology will be necessary to implement the restrictions but prohibits using only government IDs for this purpose. This is a flawed attempt to protect privacy.

Since platforms will have to provide means other than government ID to verify their users’ ages, they will likely rely on unreliable tools like biometric scanners. The Australian government awarded the contract for testing age verification technology to a UK-based company, Age Check Certification Scheme (ACCS), which, according to the company website, “can test all kinds of age verification systems,” including “biometrics, database lookups, and artificial intelligence-based solutions.”

The ban will not take effect for at least another 12 months while these points are decided upon, but we are already concerned that the systems required to comply with this law will burden all Australians’ privacy, anonymity, and data security.

Banning social media and introducing mandatory age verification checks is the wrong approach to protecting young people online, and this bill was hastily pushed through the Parliament of Australia with little oversight or scrutiny. We urge politicians in other countries—like the U.S. and France—to explore less invasive approaches to protecting all people from online harms and focus on comprehensive privacy protections, rather than mandatory age verification.

Saving the Internet in Europe: How EFF Works in Europe

December 16, 2024 at 11:32

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings a set of unique expertise in European digital policy making and fundamental rights online. They engage with lawmakers, provide policy expertise and coordinate EFF’s work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work does not only span EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and also work on issues facing users in the Balkans or accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked to advise on online regulation in Georgia, Moldova and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online

Late last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last-minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users.

TELL CONGRESS: VOTE NO ON KOSA

Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we’ve said before, KOSA’s “duty of care” section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill’s authors inaccurately claim KOSA only regulates the design of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

The authors have failed to grasp the difference between immunizing individual expression and protecting a platform from the liability that KOSA would place on it.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022. 

This updated bill adds just one sentence to the “duty of care” requirement: “Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States.” But the viewpoint of users was never impacted by KOSA’s duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms must mitigate the harms listed in the bill, not users, and the platform’s ability to share users’ views is what’s at risk—not the ability of users to express those views. Adding that the bill doesn’t impose liability based on user expression doesn’t change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let’s say, for example, that a covered platform like reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform’s defense were that this information is protected expression, the FTC could simply say that they aren’t enforcing it based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It’s a superfluous carveout for user speech and expression that KOSA never penalized in the first place, but which the platform would still be penalized for distributing. 

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression and protecting their own platform from the liability that KOSA would place on it.

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s issues has been its vague list of harms, which has remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill.

 The bill doesn’t even require that the impact be a negative one. 

It should be noted that there is no clinical definition of “compulsive usage” of online services. As in past versions of KOSA, the updated text cobbles together a definition that sounds just medical, or just legal, enough that it appears legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot.

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. 

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by “fighting back against the trans agenda,” among other things. As we’ve said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide latitude to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it’s never enforced; the FTC could simply express the types of content it believes harm children, and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be in a last-minute funding bill.

TELL CONGRESS: VOTE NO ON KOSA

Location Tracking Tools Endanger Abortion Access. Lawmakers Must Act Now.

Par : Lisa Femia
4 décembre 2024 à 17:06

EFF wrote recently about Locate X, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphone devices. Developed and sold by the data surveillance company Babel Street, Locate X collects smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices. The tool features a navigable map with red dots, each representing an individual device. Users can then follow the location of specific devices as they move about the map.

Locate X, and other similar services, can do this by taking advantage of our largely unregulated location data market.

Unfettered location tracking puts us all at risk. Law enforcement agencies can purchase their way around warrant requirements and bad actors can pay for services that make it easier to engage in stalking and harassment. Location tracking tools particularly threaten groups especially vulnerable to targeting, such as immigrants, the LGBTQ+ community, and even U.S. intelligence personnel abroad. Crucially, in a post-Dobbs United States, location surveillance also poses a serious danger to abortion-seekers across the country.

EFF has warned before about how the location data market threatens reproductive rights. The recent reports on Locate X illustrate even more starkly how the collection and sale of location data endangers patients in states with abortion bans and restrictions.

In late October, 404 Media reported that privacy advocates from Atlas Privacy, a data removal company, were able to get their hands on Locate X and use it to track an individual device’s location data as it traveled across state lines to visit an abortion clinic. Although the tool was designed for law enforcement, the advocates gained access by simply asserting that they planned to work with law enforcement in the future. They were then able to use the tool to track an individual device as it traveled from an apparent residence in Alabama, where there is a complete abortion ban, to a reproductive health clinic in Florida, where abortion is banned after 6 weeks of pregnancy. 

Following this report, we published a guide to help people shield themselves from tracking tools like Locate X. While we urge everyone to take appropriate technical precautions for their situation, it’s far past time to address the issue at its source. The onus shouldn’t be on individuals to protect themselves from such invasive surveillance. Tools like Locate X only exist because U.S. lawmakers have failed to enact legislation that would protect our location data from being bought and sold to the highest bidder. 

Thankfully, there’s still time to reshape the system, and there are a number of laws legislators could pass today to help protect us from mass location surveillance. Remember: when our location information is for sale, so is our safety. 

Blame Data Brokers and the Online Advertising Industry

A vast array of apps available for your smartphone request access to your location. Sharing this information, however, may allow your location data to be harvested and sold to shadowy companies known as data brokers. Apps request access to device location to provide various features, but once access has been granted, apps can mishandle that information and are free to share and sell your whereabouts to third parties, including data brokers. These companies collect data showing the precise movements of hundreds of millions of people without their knowledge or meaningful consent. They then make this data available to anyone willing to pay, whether that’s a private company like Babel Street (and anyone they in turn sell to) or government agencies, such as law enforcement, the military, or ICE.

This puts everyone at risk. Our location data reveals far more than most people realize, including where we live and work, who we spend time with, where we worship, whether we’ve attended protests or political gatherings, and when and where we seek medical care—including reproductive healthcare.

Without massive troves of commercially available location data, invasive tools like Locate X would not exist.

For years, EFF has warned about the risk of law enforcement or bad actors using commercially available location data to track and punish abortion seekers. Multiple data brokers have specifically targeted and sold location information tied to reproductive healthcare clinics. The data broker SafeGraph, for example, classified Planned Parenthood as a “brand” that could be tracked, allowing investigators at Motherboard to purchase data for over 600 Planned Parenthood facilities across the U.S.

Meanwhile, the data broker Near sold the location data of abortion-seekers to anti-abortion groups, enabling them to send targeted anti-abortion ads to people who visited clinics. And location data firm Placer.ai even once offered heat maps showing where visitors to Planned Parenthood clinics approximately lived. Sale to private actors is disturbing given that several states have introduced and passed abortion “bounty hunter” laws, which allow private citizens to enforce abortion restrictions by suing abortion-seekers for cash.

Government officials in abortion-restrictive states are also targeting location information (and other personal data) about people who visit abortion clinics. In Idaho, for example, law enforcement used cell phone data to charge a mother and son with kidnapping for aiding an abortion-seeker who traveled across state lines to receive care. While police can obtain this data by gathering evidence and requesting a warrant based on probable cause, the data broker industry allows them to bypass legal requirements and buy this information en masse, regardless of whether there’s evidence of a crime.

Lawmakers Can Fix This

So far, Congress and many states have failed to enact legislation that would meaningfully rein in the data broker industry and protect our location information. Locate X is simply the end result of such an unregulated data ecosystem. But it doesn’t have to be this way. There are a number of laws that Congress and state legislators could pass right now that would help protect us from location tracking tools.

1. Limit What Corporations Can Do With Our Data

A key place to start? Stronger consumer privacy protections. EFF has consistently pushed for legislation that would limit the ability of companies to harvest and monetize our data. If we enforce strict rules on how location data is collected, shared, and sold, we can stop it from ending up in the hands of private surveillance companies and law enforcement without our consent.

We urge legislators to consider comprehensive, across-the-board data privacy laws. Companies should be required to minimize the collection and processing of location data to only what is strictly necessary to offer the service the user requested (see, for example, the recently-passed Maryland Online Data Privacy Act). Companies should also be prohibited from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.

We also support reproductive health-specific data privacy laws, like Rep. Sara Jacobs’ proposed “My Body My Data” Act. Laws like this would create important protections for a variety of reproductive health data, even beyond location data. Abortion-specific data privacy laws can provide some protection against the specific problem posed by Locate X. But to fully protect against location tracking tools, we must legally limit processing of all location data and not just data at sensitive locations, such as reproductive healthcare clinics.

While a limited law might provide some help, it would not offer foolproof protection. Imagine this scenario: someone travels from Alabama to New York for abortion care. With a data privacy law that protects only sensitive, reproductive health locations, Alabama police could still track that person’s device on the journey to New York. Upon reaching the clinic in New York, their device would disappear into a sensitive location blackout bubble for a couple of hours, then reappear outside of the bubble where police could resume tracking as the person heads home. In this situation, it would be easy to infer where the person was during those missing two hours, giving Alabama police the lead they need.
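The inference problem in this scenario is simple enough to sketch. The following is a hypothetical illustration (the timestamps, coordinates, and `find_redaction_gaps` helper are invented for this example, not drawn from any real tool): even if a limited privacy law redacts pings inside a sensitive location, an analyst can trivially flag the resulting gap in an otherwise continuous trace.

```python
from datetime import datetime, timedelta

# Hypothetical location pings (timestamp, lat, lon) for one device.
# Pings inside the "sensitive location" have been redacted by a
# limited, location-specific privacy law.
pings = [
    (datetime(2024, 5, 1, 14, 0), 30.44, -84.28),   # en route through Florida
    (datetime(2024, 5, 1, 14, 30), 30.40, -83.00),  # regular ~30-minute pings
    (datetime(2024, 5, 1, 15, 0), 30.33, -81.66),   # last ping near the clinic
    # -- redaction gap while the device is inside the sensitive location --
    (datetime(2024, 5, 1, 17, 10), 30.34, -81.65),  # reappears just outside
]

def find_redaction_gaps(pings, threshold=timedelta(hours=1)):
    """Return (start, end) pairs where consecutive pings are unusually far apart."""
    gaps = []
    for (t1, *_), (t2, *_) in zip(pings, pings[1:]):
        if t2 - t1 > threshold:
            gaps.append((t1, t2))
    return gaps

gaps = find_redaction_gaps(pings)
for start, end in gaps:
    print(f"Gap from {start} to {end}: device likely inside a redacted location")
```

The redaction itself becomes the signal: the device's last known position before the gap, and first position after it, bracket the protected visit. This is why minimizing all location data, rather than carving out sensitive locations, is the more robust approach.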

The best solution is to minimize all location data, no exceptions.

2. Limit How Law Enforcement Can Get Our Data

Congress and state legislatures should also pass laws limiting law enforcement’s ability to access our location data without proper legal safeguards.

Much of our mobile data, like our location data, is information law enforcement would typically need a court order to access. But thanks to the data broker industry, law enforcement can skip the courts entirely and simply head to the commercial market. The U.S. government has turned this loophole into a way to gather personal data on individuals without a search warrant.

Lawmakers must close this loophole—especially if they’re serious about protecting abortion-seekers from hostile law enforcement in abortion-restrictive states. A key way to do this is for Congress to pass the Fourth Amendment is Not For Sale Act, which was originally introduced by Senator Ron Wyden in 2021 and took the important and historic step of passing the U.S. House of Representatives earlier this year.

Another crucial step is to ban law enforcement from sending “geofence warrants” to corporate holders of location data. Unlike traditional warrants, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead, police request data on every device in a given geographic area during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. This could include, of course, an abortion clinic.

Notably, geofence warrants are very popular with law enforcement. Between 2018 and 2020, Google alone received more than 5,700 demands of this type from states that now have anti-abortion and anti-LGBTQ legislation on the books.

Several federal and state courts have already found individual geofence warrants to be unconstitutional, and some have even ruled they are “categorically prohibited by the Fourth Amendment.” But instead of waiting for the remaining courts to catch up, lawmakers should take action now, pass legislation banning geofence warrants, and protect all of us, abortion-seekers included, from this form of dragnet surveillance.

3. Make Your State a Data Sanctuary

In the wake of the Dobbs decision, many states stepped up to serve as health care sanctuaries for people seeking abortion care that they could not access in their home states. To truly be a safe refuge, these states must also be data sanctuaries. A state that has data about people who sought abortion care must protect that data, and not disclose it to adversaries who would use it to punish them for seeking that healthcare. California has already passed laws to this effect, and more states should follow suit.

What You Can Do Right Now

Even before lawmakers act, there are steps you can take to better shield your location data from tools like Locate X.  As noted above, we published a Locate X-specific guide several weeks ago. There are also additional tips on EFF’s Surveillance Self-Defense site, as well as many other resources available to provide more guidance in protecting your digital privacy. Many general privacy practices also offer strong protection against location tracking. 

But don’t stop there: we urge you to make your voice heard and contact your representatives. While these precautions offer immediate protection, only stronger laws will ensure comprehensive location privacy in the long run.

Amazon and Google Must Keep Their Promises on Project Nimbus

2 décembre 2024 à 14:52

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.   

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.  

But instead, they failed to respond. This is unfortunate, since it leads us to question how serious they were in their promises. And it should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families. 

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme. 

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the companies’ commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these questions publicly, and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.   

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.
