Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself

404 Media, along with Haaretz, Notus, and Krebs On Security, recently reported on a company that captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers this tool presents are especially grave for those traveling to or from out-of-state reproductive health clinics, places of worship, and the border.

The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law enforcement in the future.

With an incoming administration adversarial to those most at risk from location tracking tools like Locate X, the time is ripe to bolster our digital defenses. Now more than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exercising their bodily autonomy. Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.

First, a short disclaimer: these tips provide some level of protection against mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies that can help restore one’s location privacy. Your security plan should reflect how specifically targeted you are for surveillance. Additional steps, such as researching and mitigating the on-board devices included with your car, or sweeping for physical GPS trackers, may be prudent but are outside the scope of this post. Likewise, more advanced techniques, such as flashing your device with a custom-built privacy- or security-focused operating system, may provide additional protections that are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.

Disable Mobile Advertising Identifiers

Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms to micro-target you with ads based on your online behavior. The linchpin that connects a piece of information (in this case, a location) delivered to one app or website at a certain point in time with the information delivered to a different app or website at the next point in time is a unique identifier such as the mobile advertising identifier (MAID). Essentially, MAIDs allow advertising platforms, and the data brokers they sell to, to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a cohesive picture of the movement of a device through space and time.
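
To make this concrete, here is a toy Python sketch of the dot-connecting step. The records and field names are entirely made up for illustration, not any broker's actual schema; the point is only that a shared MAID is enough to stitch pings from unrelated apps into one movement track:

```python
from collections import defaultdict

# Synthetic ad-request records, as a hypothetical broker might receive them:
# each app independently reports the device's MAID plus a location and time.
records = [
    {"app": "weather",    "maid": "ab-12", "lat": 40.71, "lon": -74.00, "ts": 1},
    {"app": "flashlight", "maid": "ab-12", "lat": 40.73, "lon": -73.99, "ts": 2},
    {"app": "game",       "maid": "cd-34", "lat": 34.05, "lon": -118.24, "ts": 1},
    {"app": "news",       "maid": "ab-12", "lat": 40.75, "lon": -73.98, "ts": 3},
]

def build_tracks(records):
    """Group otherwise unrelated pings into one time-ordered track per MAID."""
    tracks = defaultdict(list)
    for r in sorted(records, key=lambda r: r["ts"]):
        tracks[r["maid"]].append((r["lat"], r["lon"]))
    return dict(tracks)

tracks = build_tracks(records)
# Three pings from three different apps now form a single path for "ab-12".
print(tracks["ab-12"])
```

Deleting or resetting the MAID breaks exactly this join: without a stable key, the pings revert to a disconnected scatterplot.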

As a result of significant pushback by privacy advocates, both Android and iOS now provide ways to stop advertising identifiers from being delivered to third parties. As we described in a recent post, you can do this on Android by following these steps:

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can use the older version of Android’s privacy controls to reset it and ask apps not to track you.

And on iOS:

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the “Allow Apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track you in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Audit Your Apps’ Trackers and Permissions

In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged with trackers delivering your sensitive details to third parties. That’s why it’s advisable to limit the number of apps you download and instead use your pre-existing apps or operating system to, say, find the bathroom light switch at night. It isn’t just good for your privacy: any new app you download also increases your “attack surface,” or the possible paths hackers might have to compromise your device.

We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts, offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get to these pages:

On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager

On iPhone: Open Settings > Privacy & Security.

If you're inclined to do so, there are tricks for further research. For example, you can look up trackers in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will allow you to see exactly what permissions an app uses, how often it uses them, and what domains it communicates with. You can investigate any given domain by pasting it into a search engine and seeing what’s been reported about it. Pro tip: to exclude results from that domain itself and only include what other domains say about it, many search engines like Google allow you to use the syntax -site:www.example.com.
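
If you find yourself checking many tracker domains, the same operator can be composed programmatically. A small Python sketch (the helper name is ours; the URL uses Google's standard search endpoint):

```python
from urllib.parse import urlencode

def research_query(domain: str) -> str:
    """Build a search URL that looks up a domain while excluding results
    hosted on that domain itself, via the -site: operator."""
    q = f'"{domain}" -site:{domain}'
    return "https://www.google.com/search?" + urlencode({"q": q})

# Investigate a (placeholder) tracker domain seen in an app privacy report.
url = research_query("www.example.com")
print(url)
```

The quoted domain plus the `-site:` exclusion returns only what *other* sites say about the domain, which is usually the useful part.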

Disable Real-Time Tracking with Airplane Mode

To prevent an app from having network connectivity and sending out your location in real-time, you can put your phone into airplane mode. Although it won’t prevent an app from storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.

Some apps are designed to let you navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with “Offline maps.” Tapping this will allow you to draw a bounding box and pre-download an entire region, which you can then navigate without connectivity. As of iOS 18, you can do this in Apple Maps too: tap your profile picture, then “Offline Maps,” and “Download New Map.”

Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored locally.
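
Because the GPS fix is computed entirely on-device, even the distance math needs no network. A minimal Python sketch of the standard haversine (great-circle) calculation an offline map app can run locally; the coordinates are arbitrary San Francisco examples:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Distance from a GPS fix downtown to a destination, computed fully offline.
print(round(haversine_km(37.7749, -122.4194, 37.7955, -122.3937), 2))  # about 3.2 km
```

No packet ever needs to leave the phone for this: the receiver only listens to satellite signals, and the arithmetic runs against the locally stored map.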

Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.

Separate Devices for Separate Purposes

Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone. Not everyone can afford to purchase a separate device just for those times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.

A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place where you know Wi-Fi is available. A separate device also means any persistent identifiers (such as the MAID described above) are different, along with device characteristics that won’t be tied to your normal personal smartphone. Keeping this phone’s apps, permissions, and browsing to an absolute minimum will avoid a situation where that random sketchy game you keep on your normal device to kill time sends your location to its servers every 10 seconds.

One good (though more onerous) practice that would remove any persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory settings after each visit to a sensitive location. Just remember to re-download your offline maps and reapply your privacy settings afterwards.

Further Reading

Our own Surveillance Self-Defense site, as well as many other resources, are available to provide more guidance in protecting your digital privacy. Often, general privacy tips are applicable in protecting your location data from being divulged, as well.

The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target audiences. Earlier this year, the FTC showed some appetite to pursue enforcement action against companies brokering the mobile location data of users. We applauded this enforcement and hope it will continue into the next administration. But regulatory authorities only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect citizens from the vast array of data brokers and advertising services profiting off of surveilling us all.

Only a federal privacy law with a strong private right of action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!

Celebrating the Life of Aaron Swartz: Aaron Swartz Day 2024

Aaron Swartz was a digital rights champion who believed deeply in keeping the internet open. His life was cut short in 2013, after federal prosecutors charged him under the Computer Fraud and Abuse Act (CFAA) for systematically downloading academic journal articles from the online database JSTOR. Facing the prospect of a long and unjust sentence, Aaron died by suicide at the age of 26. EFF was proud to call Aaron a friend and ally.

Today, November 8, would have been his 38th birthday.  On November 9, the organizers of Aaron Swartz Day are celebrating his life with a guest-packed podcast featuring those carrying on the work around issues close to his heart. Hosts Lisa Rein and Andre Vinicus Leal Sobral will speak to: 

  • Ryan Shapiro, co-founder of the national security transparency non-profit Property of the People
  • Nathan Dyer of SecureDrop, Newsroom Support Engineer for the Freedom of the Press Foundation.
  • Tracey Jaquith, Founding Coder and TV Architect at the Internet Archive
  • Tracy Rosenberg, co-founder of the Aaron Swartz Day Police Surveillance Project and Oakland Privacy
  • Brewster Kahle, founder of the Internet Archive
  • Ryan Sternlicht, VR developer, educator, researcher, advisor, and maker
  • Grant Smith Ellis, Chairperson of the Board, MassCann and Legal Intern at the Parabola Center
  • Michael “Mek” Karpeles, Open Library, Internet Archive

The podcast will start at 2 p.m. PT/10 p.m. UTC. Please read the official page of the Aaron Swartz Day and International Hackathon for full details.

If you're a programmer or developer engaged in cutting-edge exploration of technology, please check out EFF's Coders' Rights Project.

EFF to Second Circuit: Electronic Device Searches at the Border Require a Warrant

EFF, along with ACLU and the New York Civil Liberties Union, filed an amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Kamaldoss, involves the criminal prosecution of a man whose cell phone and laptop were forensically searched after he landed at JFK airport in New York City. While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data to create a detailed report of the device owner’s activities and communications. In part based on evidence obtained during the forensic device searches, Mr. Kamaldoss was subsequently charged with prescription drug trafficking.

The district court upheld the forensic searches of his devices because the government had reasonable suspicion that the defendant “was engaged in efforts to illegally import scheduled drugs from abroad, an offense directly tied to at least one of the historic rationales for the border exception—the disruption of efforts to import contraband.”

The number of warrantless device searches at the border, and the significant invasion of privacy they represent, is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.

The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data. Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.

Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.

If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

In our brief, we also highlighted three other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Smith (2023), which we wrote about last year; U.S. v. Sultanov (2024); and U.S. v. Fox (2024). We plan to file briefs in their appeals as well, in the hope that the Second Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

EFF to Court: Reject X’s Effort to Revive a Speech-Chilling Lawsuit Against a Nonprofit

This post was co-written by EFF legal intern Gowri Nayar.

X’s lawsuit against the nonprofit Center for Countering Digital Hate is intended to stifle criticism and punish the organization for its reports criticizing the platform’s content moderation practices, and a previous ruling dismissing the lawsuit should be affirmed, EFF and multiple organizations argued in a brief filed this fall. 

X sued the Center for Countering Digital Hate (“CCDH”) in federal court in 2023 in response to its reports, which concluded that X’s practices have facilitated an environment of hate speech and misinformation online. Although X’s suit alleges, among other things, breach of contract and violation of the Computer Fraud and Abuse Act, the case is really about X trying to hold CCDH liable for the public controversy surrounding its moderation practices. At bottom, X is claiming that CCDH damaged the platform by critically reporting on it.

CCDH sought to throw out the case on the merits and under California’s anti-SLAPP statute. The California law allows courts to dismiss lawsuits that are filed in retaliation for someone exercising their free speech rights (such suits are known as Strategic Lawsuits Against Public Participation, or SLAPPs). In March, the district court ruled in favor of CCDH, dismissed the case, and found that the lawsuit was a SLAPP.

As the district judge noted, X’s suit “is about punishing the Defendants for their speech.” The court correctly rejected X’s contract and CFAA theories, seeing them for what they were: grievances with CCDH’s criticisms masquerading as legal claims.

X appealed the ruling to the U.S. Court of Appeals for the Ninth Circuit earlier this year. In September, EFF, along with the ACLU, ACLU of Northern California, and the Knight First Amendment Institute at Columbia University, filed an amicus brief in support of CCDH.        

The amicus brief argues that the Ninth Circuit should not allow X to make use of state contract law and a federal anti-hacking statute to stifle CCDH’s speech. Through this lawsuit, X wants to punish CCDH for publishing reports that highlighted how X’s policies and practices are allowing misinformation and hate speech to thrive on its platform. We also argue against the enforcement of X’s anti-scraping provisions because of how vital scraping is to modern journalism and research.

Lastly, we called on the court to dismiss X’s interpretation of the CFAA because it relied on a legal theory that has already been rejected by courts—including the Ninth Circuit itself—in earlier cases. Allowing the CFAA to be used to criminalize all instances of unauthorized access would run counter to prior decisions and would render illegal large categories of activities such as sharing passwords with friends and family.

Ruling in favor of X in this lawsuit would set a very dangerous precedent for free speech rights and allow powerful platforms to exert undue control over information online. We hope the Ninth Circuit affirms the lower court decision and dismisses this meritless lawsuit.

The 2024 U.S. Election is Over. EFF is Ready for What's Next.

The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high. 

No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running. 

During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally.  We supported those protesting in the streets, with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information for how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues be more safe.

Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, it did so—recognizing that location information collected by cell providers creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data.

EFF is at its best when the stakes are high. 

But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to backdoor private messaging with “ghost” and “client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common sense law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back on measures that would have gone beyond what was necessary to keep people safe and healthy by invading our privacy and inhibiting our free speech.

Every time policymakers or private companies tried to undermine your rights online during the first Trump administration, we were there—just as we continued to be under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid:

  • Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impact the privacy of those who live, work, or seek refuge there, and thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.
  • Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
  • Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress. 
  • Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions. 

We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user. 

We’re not merely fighting against individual companies or elected officials or even specific administrations.  We are fighting for you. That won’t stop no matter who’s in office. 

DONATE TODAY

AI in Criminal Justice Is the Trend Attorneys Need to Know About

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.

Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, enabling broad, privacy-deteriorating surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools—combined with public records, geolocation tracking, and other data from mobile phones—are being shared between policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is then used to make black-box inferences and connections among these data points. A growing number of police departments have been eager to add AI to their arsenals, largely encouraged by extensive marketing by the companies developing and selling this equipment and software.

“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative Researcher Beryl Lipton wrote.

The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF format.

EFF Lawsuit Discloses Documents Detailing Government’s Social Media Surveillance of Immigrants

Despite rebranding a federal program that surveils the social media activities of immigrants and foreign visitors to a more benign name, the government agreed to spend more than $100 million to continue monitoring people’s online activities, records disclosed to EFF show.

Thousands of pages of government procurement records and related correspondence show that the Department of Homeland Security and its component Immigration and Customs Enforcement largely continued an effort, originally called “extreme vetting,” to try to determine whether immigrants posed any threat by monitoring their social media and internet presence. The only real change appeared to be rebranding the program as the Visa Lifecycle Vetting Initiative.

The government disclosed the records to EFF after we filed suit in 2022 to learn what had become of a program proposed by President Donald Trump. The program continued under President Joseph Biden. Regardless of the name used, DHS’s program raises significant free expression and First Amendment concerns because it chills the speech of those seeking to enter the United States and allows officials to target and punish them for expressing views they don’t like.

Yet that appears to be a major purpose of the program, the released documents show. For example, the terms of the contracting request specify that the government sought a system that could:

analyze and apply techniques to exploit publicly available information, such as media, blogs, public hearings, conferences, academic websites, social media websites such as Twitter, Facebook, and LinkedIn, radio, television, press, geospatial sources, internet sites, and specialized publications with intent to extract pertinent information regarding individuals.

That document and another one make explicit that one purpose of the surveillance and analysis is to identify “derogatory information” about visa applicants and other visitors. The vague phrase is broad enough to potentially capture any online expression that is critical of the U.S. government or its actions.

EFF has called on DHS to abandon its online social media surveillance program because it threatens to unfairly label individuals as a threat or otherwise discriminate against them on the basis of their speech. This could include denying people access to the United States for speaking their mind online. It’s also why EFF has supported a legal challenge to a State Department practice requiring people applying for a visa to register their social media accounts with the government.

The documents released in EFF’s lawsuit also include a telling passage about the controversial program and the government’s efforts to sanitize it. In an email discussing the lawsuit against the State Department’s social media moniker collection program, an ICE official describes the government’s need to rebrand the program, “from what ICE originally referred to as the Extreme Vetting Initiative.”

The official wrote:

On or around July 2017 at an industry day event, ICE sought input from the private sector on the use of artificial intelligence to assist in visa applicant vetting. In the months that followed there was significant pushback from a variety of channels, including Congress. As a result, on or around May 2018, ICE modified its strategy and rebranded the concept as the Visa Lifecycle Vetting Project.

Other documents detail the specifics of the contract and bidding process that resulted in DHS awarding $101,155,431.20 to SRA International, Inc., a government contractor that uses a different name after merging with another contractor. The company is owned by General Dynamics.

The documents also detail an unsuccessful effort by a competitor to overturn DHS’s decision to award the contract to SRA, though much of the content of that dispute is redacted.

All of the documents released to EFF are available on DocumentCloud.

Judge’s Investigation Into Patent Troll Results In Criminal Referrals

In 2022, three companies with strange names and no clear business purpose beyond patent litigation filed dozens of lawsuits in Delaware federal court, accusing businesses of all sizes of patent infringement. Some of these complaints claimed patent rights over basic aspects of modern life; one, for example, involved a patent that pertains to the process of clocking in to work through an app.

These companies—Mellaconic IP, Backertop Licensing, and Nimitz Technologies—seemed to be typical examples of “patent trolls,” companies whose primary business is suing others over patents or demanding licensing fees rather than providing actual products or services.

However, the cases soon took an unusual turn. The Delaware federal judge overseeing the cases, U.S. District Judge Colm Connolly, sought more information about the patents and their ownership. One of the alleged owners was a food-truck operator who had been promised “passive income,” but was entitled to only a small portion of any revenue generated from the lawsuits. Another owner was the spouse of an attorney at IP Edge, the patent-assertion company linked to all three LLCs. 

Following an extensive investigation, the judge determined that attorneys associated with these shell companies had violated legal ethics rules. He pointed out that the attorneys may have misled Hau Bui, the food-truck owner, about his potential liability in the case. Judge Connolly wrote: 

[T]he disparity in legal sophistication between Mr. Bui and the IP Edge and Mavexar actors who dealt with him underscore that counsel's failures to comply with the Model Rules of Professional Conduct while representing Mr. Bui and his LLC in the Mellaconic cases are not merely technical or academic.

Judge Connolly also concluded that IP Edge, the patent-assertion company behind hundreds of patent lawsuits and linked to the three LLCs, was the “de facto owner” of the patents asserted in his court, but that it attempted to hide its involvement. He wrote, “IP Edge, however, has gone to great lengths to hide the ‘we’ from the world,” with "we" referring to IP Edge. Connolly further noted, “IP Edge arranged for the patents to be assigned to LLCs it formed under the names of relatively unsophisticated individuals recruited by [IP Edge office manager] Linh Deitz.” 

The judge referred three IP Edge attorneys to the Supreme Court of Texas’ Unauthorized Practice of Law Committee for engaging in “unauthorized practices of law in Texas.” Judge Connolly also sent a letter to the Department of Justice, suggesting an investigation into “individuals associated with IP Edge LLC and its affiliate Mavexar LLC.”

Patent Trolls Tried To Shut Down This Investigation

The attorneys involved in this wild patent trolling scheme challenged Judge Connolly’s authority to proceed with his investigation. However, because transparency in federal courts is essential and applicable to all parties, including patent assertion entities, EFF and two other patent reform groups filed a brief in support of the judge’s investigation. The brief argued that “[t]he public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts.” Companies targeted by the patent trolls, as well as the Chamber of Commerce, filed their own briefs supporting the investigation. 

The appeals court sided with us, upholding Judge Connolly’s authority to proceed, which led to the referral of the involved attorneys to the disciplinary counsel of their respective bar associations. 

After this damning ruling, one of the patent troll companies and its alleged owner made a final effort at appealing this outcome. In July of this year, the U.S. Court of Appeals for the Federal Circuit ruled that investigating Backertop Licensing LLC and ordering its alleged owner to testify was “an appropriate means to investigate potential misconduct involving Backertop.” 

In EFF’s view, these types of investigations into the murky world of patent trolling are not only appropriate but should happen more often. Now that the appeals court has ruled, let’s take a look at what we learned about the patent trolls in this case. 

Patent Troll Entities Linked To French Government

One of the patent trolling entities, Nimitz Technologies LLC, asserted a single patent, U.S. Patent No. 7,848,328, against 11 companies. When the judge required Nimitz’s supposed owner, a man named Mark Hall, to testify in court, Hall could not describe anything about the patent or explain how Nimitz acquired it. He didn’t even know the name of the patent (“Broadcast Content Encapsulation”). When asked what technology was covered by the patent, he said, “I haven’t reviewed it enough to know,” and when asked how he paid for the patent, Hall replied, “no money exchanged hands.” 

The exchange between Hall and Judge Connolly went as follows: 

Q. So how do you come to own something if you never paid for it with money?

A. I wouldn't be able to explain it very well. That would be a better question for Mavexar.

Q. Well, you're the owner?

A. Correct.

Q. How do you know you're the owner if you didn't pay anything for the patent?

A. Because I have the paperwork that says I'm the owner.

(Nov. 27, 2023 Opinion, pages 8-9.) 

The Nimitz patent originated from the Finnish cell phone company Nokia, which later assigned it and several other patents to France Brevets, a French sovereign investment fund, in 2013. France Brevets, in turn, assigned the patent to a US company called Burley Licensing LLC, an entity linked to IP Edge, in 2021. Hau Bui (the food truck owner) signed on behalf of Burley, and Didier Patry, then the CEO of France Brevets, signed on behalf of the French fund. 

France Brevets was an investment fund formed in 2009 with €100 million in seed money from the French government to manage intellectual property. France Brevets was set to receive 35% of any revenue related to “monetizing and enforcement” of the patent, with Burley agreeing to file at least one patent infringement lawsuit within a year, and collect a “total minimum Gross Revenue of US $100,000” within 24 months, or the patent rights would be given back to France Brevets. 

Burley Licensing LLC, run by IP Edge personnel, then created Nimitz Technologies LLC—a company with no assets except for the single patent. They obtained a mailing address for it from a Staples in Frisco, Texas, and assigned the patent to the LLC in August 2021, while the obligations to France Brevets remained unchanged until the fund shut down in 2022.

The Bigger Picture

It’s troubling that patent lawsuits are often funded by entities with no genuine interest in innovation, such as private equity firms. However, it’s even more concerning when foreign government-backed organizations like France Brevets manipulate the US patent system for profit. In this case, a Finnish company sold its patents to a French government fund, which used US-based IP lawyers to file baseless lawsuits against American companies, including well-known establishments like Reddit and Bloomberg, as well as smaller ones like Tastemade and Skillshare.

Judges should enforce rules requiring transparency about third-party funding in patent lawsuits. When ownership is unclear, it’s appropriate to insist that the real owners show up and testify—before dragging dozens of companies into court over dubious software patents. 

Related documents: 

  • Memorandum and Order referring counsel to disciplinary bodies (Nov. 23, 2023) 
  • Federal Circuit Opinion affirming the order requiring Lori LaPray to appear “for testimony regarding potential fraud on the court,” as well as the District Court’s order of monetary sanction against Ms. LaPray for subsequently failing to appear

The Human Toll of ALPR Errors

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gun point. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
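The failure mode described above comes down to matching on plate number alone. Here is a minimal, hypothetical Python sketch of that hot-list lookup, with made-up plate data, showing how a cross-state plate-number collision (or a single misread character) produces a hit that only a human check of the state and vehicle description would catch:

```python
# Hypothetical sketch of a naive ALPR hot-list lookup. The hot list,
# plate numbers, and vehicle details below are invented for illustration.

HOT_LIST = {
    # plate number -> (state, vehicle description)
    "LKD8693": ("MT", "stolen motorcycle"),
}

def check_plate(ocr_plate: str, observed_state: str):
    """Return a hot-list alert if the OCR'd plate number matches, else None."""
    hit = HOT_LIST.get(ocr_plate)
    if hit is None:
        return None
    listed_state, description = hit
    # Matching by number alone: a single OCR misread (a '3' seen as a '7')
    # or the same number issued in two states triggers an alert here.
    # Only the state_mismatch flag hints that verification is needed.
    return {
        "plate": ocr_plate,
        "listed_state": listed_state,
        "observed_state": observed_state,
        "state_mismatch": listed_state != observed_state,
        "description": description,
    }

# A Colorado SUV whose plate number collides with a Montana motorcycle's
# still raises an alert; acting on it without checking the state or
# vehicle type is the error described above.
alert = check_plate("LKD8693", "CO")
```

The sketch is not any vendor's actual implementation; it only illustrates why an unverified number-only match is unreliable evidence.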

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her to exit her vehicle, at gun point, and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

It turned out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black.  ALPR technology can exacerbate our already discriminatory policing system, among other reasons because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gun point to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ for an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gun point and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state of 1-in-10 plates (not counting other reading errors).

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the safety, privacy, and other threats of ALPR technology through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

"Is My Phone Listening To Me?"

The short answer is no, probably not! But, with EFF’s new site, Digital Rights Bytes, we go in-depth on this question—and many others.

Whether you’re just starting to question some of the effects of technology in your life or you’re the designated tech wizard of your family looking for resources to share, Digital Rights Bytes is here to help answer some common questions that may be bugging you about the devices you use.  

We often hear the question, “Is my phone listening to me?” Generally, the answer is no, but the reason you may think that your phone is listening to you is actually quite complicated. Data brokers and advertisers have some sneaky tactics at their disposal to serve you ads that feel creepy in the moment and may make you think that your device is secretly taking notes on everything you say. 

Watch the short video, featuring a cute little penguin discovering how advertisers collect and track their personal data, and share it with your family and friends who have asked similar questions! Curious to learn more? We also have information about how to mitigate this tracking and what EFF is doing to stop these data brokers from collecting your information.

Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. Got any additional questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes so we can find it!

EFF Launches Digital Rights Bytes to Answer Tech Questions that Bug Us All

New Site Dishes Up Byte-Sized, Yummy, Nutritious Videos and Other Information About Your Online Life

SAN FRANCISCO—The Electronic Frontier Foundation today launched “Digital Rights Bytes,” a new website with short videos offering quick, easily digestible answers to the technology questions that trouble us all. 

“It’s increasingly clear there is no way to separate our digital lives from everything else that we do — the internet is now everybody's hometown. But nobody handed us a map or explained how to navigate safely,” EFF Executive Director Cindy Cohn said. “We hope Digital Rights Bytes will provide easy-to-understand information people can trust, and an entry point for thinking more broadly about digital privacy, freedom of expression, and other civil liberties in our digital world.” 

Initial topics on Digital Rights Bytes include “Is my phone listening to me?”, “Why is device repair so costly?”, “Can the government read my text messages?” and others. More topics will be added over time. 

For each topic, the site provides a brief animated video and a concise, layperson’s explanation of how the technology works. It also provides advice and resources for what users can do to protect themselves and take action on important issues. 

EFF is the leading nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. Its mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.

For the Digital Rights Bytes website: https://www.digitalrightsbytes.org/

Contact:
Jason Kelley, Activism Director

Sorry, Gas Companies - Parody Isn't Infringement (Even If It Creeps You Out)

Activism comes in many forms. You might hold a rally, write to Congress, or fly a blimp over the NSA. Or you might use a darkly hilarious parody to make your point, like our client Modest Proposals recently did.

Modest Proposals is an activist collective that uses parody and culture jamming to advance environmental justice and other social causes. As part of a campaign shining a spotlight on the environmental damage and human toll caused by the liquefied natural gas (LNG) industry, Modest Proposals invented a company called Repaer. The fake company’s website offers energy companies the opportunity to purchase “life offsets” that balance the human deaths their activities cause by extending the lives of individuals deemed economically valuable. The website also advertises a “Plasma Pals” program that encourages parents to donate their child’s plasma to wealthy recipients. Scroll down on the homepage a bit, and you’ll see the logos for three (real) LNG companies—Repaer’s “Featured Partners.” 

Believe it or not, the companies didn’t like this. (Shocking!) Two of them—TotalEnergies and Equinor—sent our client stern emails threatening legal action if their names and logos weren’t removed from the website. TotalEnergies also sent a demand to the website’s hosting service, Netlify, that got repaer.earth taken offline. That was our cue to get involved.

We sent letters to both companies, explaining what should be obvious: the website was a noncommercial work of activism, unlikely to confuse any reasonable viewer. Trademark law is about protecting consumers; it’s not a tool for businesses to shut down criticism. We also sent a counternotice to Netlify denying TotalEnergies’ allegations and demanding that repaer.earth be restored. 

We wish this were the first time we’ve had to send letters like that, but EFF regularly helps activists and critics push back on bogus trademark and copyright claims. This incident is also part of a broader and long-standing pattern of the energy industry weaponizing the law to quash dissent by environmental activists, and those are just the examples EFF has written about. We’ve been fighting these tactics for a long time, both by representing individual activist groups and through supporting legislative efforts like a federal anti-SLAPP bill.

Frustratingly, Netlify made us go through the full DMCA counternotice process—including a 10-business-day waiting period to have the site restored—even though this was never a DMCA claim. (The DMCA is copyright law, not trademark, and TotalEnergies didn’t even meet the notice requirements that Netlify claims to follow.) Rather than wait around for Netlify to act, Modest Proposals eventually moved the website to a different hosting service. 

Equinor and TotalEnergies, on the other hand, have remained silent. This is a pretty common result when we help push back against bad trademark and copyright claims: the rights owners slink away once they realize their bullying tactics won’t work, without actually admitting they were wrong. We’re glad these companies seem to have backed off regardless, but victims of bogus claims deserve more certainty than this.

The Frightening Stakes of this Halloween’s Net Neutrality Hearing

The future of the open internet is in danger this October 31st, not from ghosts and goblins, but from the broadband companies that control internet access in most of the United States.  
 
These companies would love to use their oligopoly power to charge users and websites additional fees for “premium” internet access, which they can create by artificially throttling some connections and prioritizing others. Thanks to public pressure and a coalition of public interest groups, the Federal Communications Commission (FCC) has forbidden such paid prioritization and throttling, as well as outright blocking of websites. These net neutrality protections ensure that ISPs treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites or services. 

But the lure of making more money without investing in better service or infrastructure is hard for broadband services like Comcast and AT&T to resist. So the big telecom companies have challenged the FCC’s rules in court—and their case has now made its way to the Sixth Circuit Court of Appeals. 

A similar challenge was soundly rejected by the D.C. Circuit Court of Appeals in 2016. Unfortunately the FCC, led by a new Chair, repealed those hard-won rules in 2017—despite intense resistance from nonprofits, artists, tech companies large and small, libraries, and millions of regular internet users. A few years later, FCC membership changed again, and the new FCC restored net neutrality protections. As everyone expected, Team Telecom ran back to court, leading to this appeal. 

A few things have changed since 2017, however, and none of them good for Team Internet. For one thing, the case is being heard in the Sixth Circuit, which is not bound by the D.C. Circuit’s earlier reasoning, and which has already signaled its sympathy for Team Telecom in a preliminary ruling. 

And, of course, the makeup of the Supreme Court has changed dramatically. Justice Kavanaugh, in particular, dissented from the D.C. Circuit majority when it reviewed the 2015 order—a dissent that clearly influenced the Sixth Circuit’s initial ruling in the case. That influence may well be felt when this case inevitably makes its way to the Supreme Court.   

The central legal questions are: (1) what Congress meant when it directed the FCC to regulate “telecommunications services” differently from “information services,” and (2) into which category broadband falls. This matters because the rules that we need to preserve the open internet — such as forbidding discrimination against certain applications — require the FCC to treat access providers like “common carriers,” treatment that can only be applied to telecommunications services. If the FCC has to define broadband as an “information service,” it can impose regulations that “promote competition” (good) but it cannot do much to forbid paid prioritization, throttling, or blocking (bad).

The answers to those questions will likely depend on whether the Sixth Circuit thinks regulation of the internet is a “major question,” meaning an issue of “vast economic or political significance.” If so, the Supreme Court has said that agencies can only address it if Congress has clearly authorized them to do so.

The “major questions doctrine” is on the rise thanks to a Supreme Court majority that is deeply skeptical of the so-called administrative state. In the past few years, the majority has used it to reject multiple agency actions, such as the CDC’s temporary moratorium on evictions in areas hard-hit by Covid.  

Equally importantly, the Supreme Court recently changed the rules on whether and how courts should defer to plausible agency interpretations of the statutes under which they operate. In Loper Bright Enterprises v. Raimondo, the Court ended an era of judicial deference to agency determinations. Rather than allowing agencies to act according to their own plausible determinations about the scope and meaning of the authorities granted to them by Congress, courts are now instructed to reach those determinations independently.
 
Ironically, under the old rule of deference, in 2003 the Ninth Circuit independently concluded that broadband was a telecommunications service – the most straightforward and correct reading of the statute and the one that provides a sound legal basis for net neutrality protections. In fact, the court said it had been erroneous for the FCC to say otherwise. But the FCC and telecoms successfully argued that the courts should defer to the FCC’s contrary reading, and won at the Supreme Court based on the doctrine of judicial deference that Loper Bright has now overruled. 

Putting these legal threads together, Team Telecom is arguing that the FCC cannot classify current broadband offerings as a telecommunications service, even though that’s the best reading of the statute, because that classification would be a “major question” that only Congress can decide. Team Internet argues that Congress clearly delegated that decision-making power to the FCC, which is one reason the Supreme Court did not treat the issue as a “major question” the last time it looked at the issue. Team Telecom also argues that, after the Loper Bright decision, the court need not defer to the FCC’s interpretation of its own authority. Team Internet explains that, this time, the FCC’s interpretation aligns with the best understanding of the statute and the facts.
 
EFF stands with Team Internet and so should the court. It will likely issue a decision in the first half of 2025, so the specter of uncertainty will be with us for some time. Even when the panel issues an opinion, the losing side will be able to request that the full Sixth Circuit rehear the case, and then the Supreme Court would be the next and final resting place of the matter. 

 

Triumphs, Trials, and Tangles From California's 2024 Legislative Session

California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this year.

EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.

Celebrating Our Victories

This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.

First, we commend the California Legislature for not advancing A.B. 3080, “The Parent’s Accountability and Child Protection Act” authored by Assemblymember Juan Alanis (Modesto). The bill would have created powerful incentives for “pornographic internet websites” to use age-verification mechanisms. The bill was not clear on what counts as “sexually explicit content.” Without clear guidelines, it would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. We understand Asm. Alanis' concerns, but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.

Second, EFF worked with dozens of organizations to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these matches can't be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states. Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches—so what would this bill really change? The bill only gave the appearance of doing something to address face recognition technology's harms, while allowing the practice to continue. California should not give law enforcement the green light to mine databases, particularly those where people contributed information without knowledge that it would be accessed by law enforcement. You can read more about this bill here, and we are glad to see the California legislature reject this dangerous bill.

EFF also worked to oppose and defeat S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act requires that by January 1, 2026, consumers have an easy “one-click” button to request the removal of their personal information held by data brokers registered in California. S.B. 1076 would have opened loopholes for data brokers to duck compliance, hurting consumer rights and undoing oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076 would also likely have created significant confusion around the development, implementation, and long-term usability of the delete mechanism established in the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it.

Lastly, EFF opposed S.B. 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by Senator Scott Wiener (San Francisco). This bill aimed to regulate AI models that might have “catastrophic” effects, such as attacks on critical infrastructure. Ultimately, we believe focusing on speculative, long-term, catastrophic outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set an abstract and confusing set of regulations for those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained about the final version of the bill, as it passed the legislature.

Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.  

Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:

The Lost Promise of A.B. 3048

Throughout this session, EFF and our partners supported A.B. 3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet large companies have often made it difficult for consumers to exercise those rights on their own. The bill would have properly balanced providing consumers with ways to exercise their privacy rights without creating burdensome requirements for developers or hindering innovation.
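For readers unfamiliar with how an opt-out preference signal works in practice: under the Global Privacy Control proposal, one widely discussed signal of this kind, a participating browser attaches a `Sec-GPC: 1` header to its requests, and a business covered by the CCPA would treat that header as a do-not-sell/share request. The sketch below is illustrative only and is not drawn from the bill text:

```python
# Hypothetical server-side handling of the Global Privacy Control
# opt-out signal. Per the GPC proposal, participating browsers send
# a "Sec-GPC: 1" request header; the handler name and flow here are
# invented for illustration.

def should_honor_opt_out(headers: dict) -> bool:
    """Treat a Sec-GPC: 1 header as a valid opt-out preference signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

# Example request headers from a browser with the signal enabled.
request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}

sharing_enabled = True
if should_honor_opt_out(request_headers):
    # Suppress sale/sharing of this user's data with third parties.
    sharing_enabled = False
```

The point of A.B. 3048 was the browser side of this exchange: requiring browsers to offer users a way to turn the signal on, so that the server-side obligation businesses already have can actually be triggered.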

Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist. But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy rights.

This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their rights. 

Wrong Solutions to Real Problems

A big theme we saw this legislative session was proposals that claimed to address real problems but would have been ineffective or failed to respect privacy. These included bills intended to address young people’s safety online and deepfakes in elections.

While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor Newsom’s signature and takes effect on January 1, 2027. The bill aims to regulate the "addictive" features of social media platforms, but instead compromises the privacy of consumers in the state. It is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online; instead, it will harm all online speakers by burdening free speech and will diminish online privacy by incentivizing companies to collect more personal information.

It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content that falsely portrays candidates and other elections-related subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms.

Similarly, A.B. 2839, authored by Assemblymember Gail Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers, etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.

There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they were signed, a federal judge put A.B. 2839’s enforcement on pause (via a preliminary injunction) on First Amendment grounds.

Privacy Risks in State Databases

We also saw a troubling trend in the legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways, threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, which received Governor Newsom’s signature, are a step backward for data privacy.

A.B. 518 authorizes numerous agencies in California to share, without restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this personal information confidential, and no restrictions on what DSS can do with the information.

On the other hand, A.B. 2723 assigns a governing board to the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:

  1.     the right for subjects whose information is kept in the data system to receive notice their data is in the system;
  2.     the right to consent or, more meaningfully, to withhold consent;
  3.     and the right to request correction of erroneous information.

By signing A.B. 2723, Gov. Newsom stripped California parents and students of the rights to even know that this is happening, or agree to this data processing in the first place. 

Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.

A Call for Comprehensive Local Protections

A.B. 2723 and A.B. 518 reveal a crucial missing piece in Californians' privacy rights: that the privacy rights guaranteed to individuals through California's IPA do not protect them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be addressed.

Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.

Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.

No Matter What the Bank Says, It's YOUR Money, YOUR Data, and YOUR Choice

The Consumer Financial Protection Bureau (CFPB) has just finalized a rule that makes it easy and safe for you to figure out which bank will give you the best deal and switch to that bank, with just a couple of clicks.

We love this kind of thing: the coolest thing about a digital world is how easy it is to switch from one product or service to another—in theory. Digital tools are so flexible, anyone who wants your business can write a program to import your data into a new service and forward any messages or interactions that show up at the old service.

That's the theory. But in practice, companies have figured out how to use law—IP law, cybersecurity law, contract law, trade secrecy law—to literally criminalize this kind of marvelous digital flexibility, so that it can end up being even harder to switch away from a digital service than it is to hop around among traditional, analog ones.

Companies love lock-in. The harder it is to quit a product or service, the worse a company can treat you without risking your business. Economists call the difficulties you face in leaving one service for another "switching costs," and businesses go to great lengths to raise the switching costs they can impose on you if you have the temerity to be a disloyal customer.

So long as it's easier to coerce your loyalty than it is to earn it, companies win and their customers lose. That's where the new CFPB rule comes in.

Under this rule, you can authorize a third party - another bank, a comparison shopping site, a broker, or just your bookkeeping software - to request your account data from your bank. The bank has to give the third party all the data you've authorized. This data can include your transaction history and all the data needed to set up your payees and recurring transactions somewhere else.

That means that—for example—you can authorize a comparison shopping site to access some of your bank details, like how much you pay in overdraft fees and service charges, how much you earn in interest, and what your loans and credit cards are costing you. The service can use this data to figure out which bank will cost you the least and pay you the most. 

Then, once you've opened an account with your new best bank, you can direct it to request all your data from your old bank, and with a few clicks, get fully set up in your new financial home. All your payees transfer over, all your regular payments, all the transaction history you'll rely on at tax time. "Painless" is an admittedly weird adjective to apply to household finances, but this comes pretty darned close.

Americans lose a lot of money to banking fees and low interest rates. How much? Well, CFPB economists, using a very conservative methodology, estimate that this rule will make the American public at least $677 million better off, every year.

Now, that $677 million has to come from somewhere, and it does: it comes from the banks that are currently charging sky-high fees and paying rock-bottom interest. The largest of these banks are suing the CFPB in a bid to block the rule from taking effect.

These banks claim that they are doing this to protect us, their depositors, from a torrent of fraud that would be unleashed if we were allowed to give third parties access to our own financial data. Clearly, this is the only reason a giant bank would want to make it harder for us to change to a competitor (it can't possibly have anything to do with the $677 million we stand to save by switching).

We've heard arguments like these before. While EFF takes a back seat to no one when it comes to defending user security (we practically invented this), we reject the idea that user security is improved when corporations lock us in (and leading security experts agree with us).

This is not to say that a bad data-sharing interoperability rule wouldn't be, you know, bad. A rule that lacked the proper safeguards could indeed enable a wave of fraud and identity theft the likes of which we've never seen.

Thankfully, this is a good interoperability rule! We liked it when it was first proposed, and it got even better through the rulemaking process.

First, the CFPB had the wisdom to know that a federal finance agency probably wasn't the best—or only—group of people to design a data-interchange standard. Rather than telling the banks exactly how they should transmit data when requested by their customers, the CFPB instead said, "These are the data you need to share and these are the characteristics of a good standards body. So long as you use a standard from a good standards body that shares this data, you're in compliance with the rule." This is an approach we've advocated for years, and it's the first time we've seen it in the wild.

The CFPB also instructs the banks to fail safe: any time a bank gets a request to share your data that it thinks might be fraudulent, it has the right to block the process until it can get more information and confirm that everything is on the up-and-up.

The rule also regulates the third parties that can get your data, establishing stringent criteria for which kinds of entities can do this. It also limits how they can use your data (strictly for the purposes you authorize), what they must do with it once those purposes are fulfilled (delete it forever), and what else they are allowed to do with it (nothing). There's also a mini "click-to-cancel" rule that guarantees you can instantly revoke any third party's access to your data, for any reason.

The CFPB has had the authority to make a rule like this since its founding in 2010, with the passage of the Consumer Financial Protection Act (CFPA). Back when the CFPA was working its way through Congress, the banks howled that they were being forced to give up "their" data to their competitors.

But it's not their data. It's your data. The decision about who you share it with belongs to you, and you alone.

Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff

A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors, and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.

Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.

You love to see it.

Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores (Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get all their apps from the company's store.

This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can demand all kinds of concessions from you, in order to reach me. The most important concession is money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with an app - not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.

For avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that's gone up 40 percent since covid hit, a price-hike that is itself attributable to monopoly power in the sector. That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case they only charge five times more than the Visa/Mastercard cartel).
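The arithmetic behind the "ten times" and "five times" claims can be sketched in a few lines (the rates come from the paragraph above; the $10 purchase is a made-up example):

```python
# Comparing the app-store commission to a typical payment-processing rate.
APP_STORE_RATE = 0.30   # Google/Apple's standard commission
SMALL_BIZ_RATE = 0.15   # the "small business" discount tier
PROCESSING_RATE = 0.03  # midpoint of the typical 2-5 percent range

price = 10.00  # a hypothetical $10 in-app purchase
store_cut = round(price * APP_STORE_RATE, 2)       # $3.00 to the store
processor_cut = round(price * PROCESSING_RATE, 2)  # $0.30 to a typical processor

print(f"Store keeps ${store_cut:.2f}; a typical processor would keep ${processor_cut:.2f}")
print(f"Standard tier: {APP_STORE_RATE / PROCESSING_RATE:.0f}x the typical rate")  # 10x
print(f"Discount tier: {SMALL_BIZ_RATE / PROCESSING_RATE:.0f}x the typical rate")  # 5x
```

On a business running thin margins, the difference between surrendering 3 percent and surrendering 30 percent of every transaction is often the difference between viability and not.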

Epic Games - the company behind the wildly successful multiplayer game Fortnite - has been chasing Google and Apple through the courts over this for years, and last December, they prevailed in their case against Google.

This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to decide what to do about it.

It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The Verge.

For the next three years, Google must meet the following criteria:

  • Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
  • Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as they'd install any app;
  • Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
  • Permit app vendors to tell users about other ways to pay for the things they buy in-app;
  • Permit app vendors to set their own prices.

Google is also prohibited from using its cash to fence out rivals, for example, by:

  • Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
  • Offering incentives to app vendors to avoid rival app stores;
  • Offering incentives to hardware makers to pre-install Google Play;
  • Offering incentives to hardware makers not to install rival app stores.

These provisions tie in with Google's other recent loss, in US v. Google, where the company was found to have operated a monopoly over search. That case turned on the fact that Google paid unimaginably vast sums - more than $25 billion per year - to phone makers, browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that anyone who came up with a better search engine would have no hope of finding users.

What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software authors are at its mercy. By making it easy for users to switch from one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not because you're holding 3.3 billion Android users hostage."

Interoperability - plugging new features, services and products into existing systems - is digital technology's secret superpower, and it's great to see the courts recognizing how a well-crafted interoperability order can cut through thorny tech problems. 

Google has vowed to appeal. They say they're being singled out, because Apple won a similar case earlier this year. It's true: a different court got it wrong with Apple.

But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple responded to the EU by refusing to substantively comply with the law, teeing up another big, hairy battle.

In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game devs who are dolphins caught in Google's tuna net to the free press itself.

Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference

Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of Chiefs of Police (IACP).

IACP, held Oct. 19-22 in Boston, brings together thousands of police employees with the businesses who want to sell them guns, gadgets, and gear. Across the four-day schedule were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the trade show floor.

Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest projects. Event sponsors included big names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+ attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles, cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.

And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve seen in our research and at IACP.

A partial view of the vendor booths at IACP 2024


Doughnuts provided by police tech vendor Peregrine

“All in On AI” Demands Accountability

Police are pushing forward full speed ahead on AI. 

EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive policing, and gunshot detection. We’ve seen a trend toward the integration of these various data streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started tracking the rise of what we call Third Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative use. 

The IACP conference featured companies selling all of these kinds of surveillance. Also, each day contained multiple panels on how AI could be integrated into local police work, including featured speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose agency was among the first to use Axon’s DraftOne, software using genAI to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition software. Axon offered a whole set of different tools, centering its whole presentation around AxonAI and the computer-driven future. 

The booth for police drone provider, Brinc

The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks, including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear guardrails. 

The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle

AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image, what we click on. 

As one vendor employee said to us, “Yeah, it’s scary.” 

Corporate harvesting and monetizing of our data is wildly unregulated. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis, Peregrine, DataMinr, and others showed off how their platforms can give police access to evermore data from tens of thousands of sources.

Some Cops Care What the Public Thinks

Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the public generally believes. In another talk, Chula Vista showed off professionally-produced videos designed to win public favor. 

This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop law enforcement’s full-scale invasion of our private lives.

Cop Tech is Coming To Every Department

The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,” police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.

A Cybertruck at the booth for Skyfire AI

“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost every company wants them as a customer, so they can start vacuuming their data into the company system and then share that data with everyone else. 

We Need Federal Data Privacy Legislation

There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these new tools, and they’re typically able to navigate around piecemeal state legislation. 

We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that limits these data companies from helping police surveil us lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore and create and grow without fear of indefinite retention of every character we type, every icon we click. 

Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a dollar off of it.

EU to Apple: “Let Users Choose Their Software”; Apple: “Nah”

This year, a far-reaching, complex new piece of legislation comes into effect in the EU: the Digital Markets Act (DMA), which represents some of the most ambitious tech policy in European history. We don’t love everything in the DMA, but some of its provisions are great, because they center the rights of users of technology, and they do that by taking away some of the control platforms exercise over users, and handing that control back to the public who rely on those platforms.

Our favorite parts of the DMA are the interoperability provisions. IP laws in the EU (and the US) have all but killed the longstanding and honorable tradition of adversarial interoperability: that’s when you can alter a service, program or device you use, without permission from the company that made it. Whether that’s getting your car fixed by a third-party mechanic, using third-party ink in your printer, or choosing which apps run on your phone, you should have the final word. If a company wants you to use its official services, it should make the best services, at the best price – not use the law to force you to respect its business model.

It seems the EU agrees with us, at least on this issue. The DMA includes several provisions that force the giant tech companies that control so much of our online lives (AKA “gatekeeper platforms”) to provide official channels for interoperators. This is a great idea, though, frankly, lawmakers should also restore the right of tinkerers and hackers to reverse-engineer your stuff and let you make it work the way you want.

One of these interop provisions is aimed at app stores for mobile devices. Right now, the only (legal) way to install software on your iPhone is through Apple’s App Store. That’s fine, so long as you trust Apple and you think they’re doing a great job, but pobody’s nerfect, and even if you love Apple, they won’t always get it right – like when they tell you you’re not allowed to have an app that records civilian deaths from US drone strikes, or a game that simulates life in a sweatshop, or a dictionary (because it has swear words!). The final word on which apps you use on your device should be yours.

Which is why the EU ordered Apple to open up iOS devices to rival app stores, something Apple categorically refuses to do. Apple’s “plan” for complying with the DMA is, shall we say, sorely lacking (this is part of a grand tradition of American tech giants wiping their butts with EU laws that protect Europeans from predatory activity, like the years Facebook spent ignoring European privacy laws, manufacturing stupid legal theories to defend the indefensible).

Apple’s plan for opening the App Store is effectively impossible for any competitor to use, but this goes double for anyone hoping to offer free and open source software to iOS users. Without free software – operating systems like GNU/Linux, website tools like WordPress, programming languages like Rust and Python, and so on – the internet would grind to a halt.

Our dear friends at Free Software Foundation Europe (FSFE) have filed an important brief with the European Commission, formally objecting to Apple’s ridiculous plan on the grounds that it effectively bars iOS users from choosing free software for their devices.

FSFE’s brief makes a series of legal arguments, rebutting Apple’s self-serving theories about what the DMA really means. FSFE shoots down Apple’s tired argument that copyrights and patents override any interoperability requirements. U.S. courts have been inconsistent on this issue, but we’re hopeful that the Court of Justice of the E.U. will reject the “intellectual property trump card.” Even more importantly, FSFE makes moral and technical arguments about the importance of safeguarding the technological self-determination of users by letting them choose free software, and about why this is as safe – or safer – than giving Apple a veto over its customers’ software choices.

Apple claims that because you might choose bad software, you shouldn’t be able to choose software, period. They say that if competing app stores are allowed to exist, users won’t be safe or private. We disagree – and so do some of the most respected security experts in the world.

It’s true that Apple can use its power wisely to ensure that you only choose good software. But it’s also used that power to attack its users, like in China, where Apple blocked all working privacy tools from iPhones and then neutered a tool used to organize pro-democracy protests.

It’s not just in China, either. Apple has blanketed the world with billboards celebrating its commitment to its users’ privacy, and they made good on that promise, blocking third-party surveillance (to the $10 billion chagrin of Facebook). But right in the middle of all that, Apple also started secretly spying on iOS users to fuel its own surveillance advertising network, and then lied about it.

Pobody’s nerfect. If you trust Apple with your privacy and security, that’s great. But people who don’t trust Apple to have the final word – people who value software freedom, or privacy (from Apple), or democracy (in China) – should have the final say.

We’re so pleased to see the EU making tech policy we can get behind – and we’re grateful to our friends at FSFE for holding Apple’s feet to the fire when they flout that law.

The Real Monsters of Street Level Surveillance

Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts. To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.

If your blood runs too cold, take a break with our favorite digital rights legends—the Encryptids.

The Face Stealer

 "The Face Stealer" text over illustration of a spider-like monster

Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.

The Real Monster

Face recognition technology (FRT) might not jump out at you, but the impacts of this monster are all too real. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities. EFF wants to banish this monster with a full ban on government use, and to prohibit companies from feeding on this data without permission.

Three-eyed Beast

"The Three-eyed Beast" text over illustration of a rectangular face with a large camera as a snout, pinned to a shirt with a badge.

Freeze! In your weakest moment, you may encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of delight turn you into a spectacle—sometimes calling in foes like the Face Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you.

The Real Monster

Body-worn cameras are marketed as a fix for police transparency, but instead our communities get another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposing it. Even worse, these cameras can house other surveillance threats like face recognition technology. Without strict safeguards, and community control over whether to adopt them in the first place, these cameras do more harm than good.

Shrapnel Wraith

"The Shrapnel Wraith" text over illustration of a mechanical vulture dropping gears and bolts

If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through your skin. Danger is circling just overhead.

The Real Monster

Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making them a potentially lethal threat. Drone and robot use must come with strong protections for people’s privacy, and we strongly oppose arming them with any weapons.

Doorstep Creep

"The Doorstep Creep" text over illustration of a cloaked figure in front of a door, holding a staff topped with a camera

Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in communities, transforming cozy entryways into fortresses of suspicion. Your visit feels judged, unwanted, cast in a shadow of loathing. As you walk away, slanderous whispers echo in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep’s dark eyes remind you of how alone you are.

The Real Monster

Community surveillance apps come in many forms, encouraging the adoption of home security devices like doorway cameras and smart doorbells, along with more crowd-sourced surveillance apps. People come to these apps out of fear and find only more of the same, with greater public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community support and mutual aid.

Foggy Gremlin

"The Foggy Fremlin" text over illustration of a little monster with sharp teeth and a long tail, rising a GPS location pin.

Be careful where you step, for this scavenger. The Foggy Gremlin sticks to you like a leech and envelops you in a psychedelic mist to draw in large predators. You can run, but you can no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been, is now a hunting ground. As exhaustion sets in, a world once open and bright has become narrow, dark, and sinister.

The Real Monster

Real-time location tracking is a chilling mechanism that enables law enforcement to monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the Fourth Amendment is Not For Sale Act to protect individuals from such tracking.

Street Level Surveillance

Fight the monsters in your community

Disability Rights Are Technology Rights

At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological self-determination is important for every technology user, and it’s especially important for users with disabilities.

Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the design process. 

The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of European democratic sentiment in the sixteenth (!) century, and it expresses a critical truth: no one can ever know your needs as well as you do. Unless you get a say in how things work, they’ll never work right.

So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start, not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.

Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to ensuring that their gadgets, code and systems can’t be modified by their users.

Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec includes numerous accessibility features, including facilities for including closed captioning and audio descriptive tracks.

But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that provides for five-year prison sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists who added more accessibility features to EME.

The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe effects, and dampens them if there are. 
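
The seizure-strobe example above can be sketched in a few lines: scan the average luminance of each buffered frame for rapid, large swings, and flag any one-second window with more flashes than a safety limit. This is a hypothetical simplification, not part of EME or any real player; genuine photosensitivity guidance (such as WCAG success criterion 2.3.1) also weighs flash area and saturated-red flashes, and the threshold values below are illustrative only.

```python
# Hypothetical sketch: flag potentially seizure-triggering strobes by looking
# for rapid, large swings in average frame luminance. Real guidance (WCAG
# 2.3.1) also weighs flash area and saturated-red flashes; the thresholds
# here are illustrative, not taken from any standard.

def mean_luminance(frame):
    """Average luminance of a frame given as a flat list of 0-255 gray values."""
    return sum(frame) / len(frame)

def has_strobe(frames, fps, delta_threshold=25.5, max_flashes_per_sec=3):
    """Return True if any one-second window contains more than
    max_flashes_per_sec large luminance swings (a swing bigger than
    delta_threshold, i.e. roughly 10% of the 0-255 range)."""
    lums = [mean_luminance(f) for f in frames]
    # Frame indices where a large luminance swing occurs.
    transitions = [i for i in range(1, len(lums))
                   if abs(lums[i] - lums[i - 1]) > delta_threshold]
    # Slide a one-second (fps-frame) window across those indices.
    for start in transitions:
        in_window = [t for t in transitions if start <= t < start + fps]
        if len(in_window) > max_flashes_per_sec:
            return True
    return False
```

A dampening pass could then attenuate the flagged windows before display; the point is that this kind of look-ahead filtering is trivial to write, and only the legal risk created by DRM keeps users from running it.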

It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole right to do so. No one should have that power.

It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the Federal Trade Commission has signaled that it will crack down on companies that use DRM to block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where it’s not only normal for tools for people with disabilities to be designed without any consideration for the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

(EFF extends our thanks to Liz Henry for their help with this article.)
