
“Guardrails” Won’t Protect Nashville Residents From AI-Enabled Camera Networks

Nashville’s Metropolitan Council is one vote away from passing an ordinance that’s being branded as “guardrails” against the privacy problems that come with giving the police a connected camera system like Axon’s Fusus. But Nashville locals are right to be skeptical of just how much protection from mass surveillance products they can expect.  

"I am against these guardrails," council member Ginny Welsch told the Tennessean recently. "I think they're kind of a farce. I don't think there can be any guardrail when we are giving up our privacy and putting in a surveillance system." 

Likewise, Electronic Frontier Alliance member Lucy Parsons Labs has inveighed against Fusus and the supposed guardrails as a fix to legislators’ and residents’ concerns in a letter to the Metropolitan Council. 

While the ordinance doesn’t name the company specifically, it was introduced in response to privacy concerns over the city’s possible contract for Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. In particular, local opponents are concerned about data-sharing—a critical part of Fusus—that could impede the city’s ability to uphold its values against the criminalization of some residents, like undocumented immigrants and people seeking reproductive or gender-affirming care.

This technology product, which was acquired by the police surveillance giant Axon in 2024, facilitates two major functions for police:

  • With the click of a button—or the tap of an icon on a map—officers can get access to live camera footage from public and private cameras, including the police’s Axon body-worn cameras, that have been integrated into the Fusus network.
  • Data feeds from a variety of surveillance tools—like body-worn cameras, drones, gunshot detection, and the connected camera network—can be aggregated into a system that makes those streams quickly accessible and susceptible to further analysis by features marketed as “artificial intelligence.”

From 2022 through 2023, Metropolitan Nashville Police Department (MNPD) had, unbeknownst to the public, already been using Fusus. When the contract came back under consideration, a public outcry and unanswered questions about the system led to its suspension, and the issue was deferred multiple times before the contract renewal was voted down late last year. Nashville council members determined that the Fusus system posed too great a threat to vulnerable groups that the council has sought to protect with city policies and resolutions, including pregnant residents, immigrants, and residents seeking gender-affirming care, among others. The state has criminalized some of the populations that the city of Nashville has passed ordinances to protect. 

Unfortunately, the fight against the sprawling surveillance of Fusus continues. The city council is now making its final consideration of the aforementioned ordinance, which some of its members say will protect city residents in the event that the mayor and other Fusus fans are able to get a contract signed after all.

These so-called guardrails include:

  • restricting the MNPD from accessing private cameras or installing public safety cameras in locations “where there is a reasonable expectation of privacy”; 
  • prohibiting using face recognition to identify individuals in the connected camera system’s footage; 
  • policies addressing authorized access to and use of the connected camera system, including how officers will be trained, and how they will be disciplined for any violations of the policy;
  • quarterly audits of access to the connected camera system; 
  • mandatory inclusion of a clause in procurement contracts allowing for immediate termination should violations of the ordinance be identified; 
  • mandatory reporting to the mayor and the council about any violations of the ordinance, the policies, or other abuse of access to the camera network within seven days of the discovery. 

Here’s the thing: even if these limited “guardrails” are in place, the only true protection from the improper use of the AI-enabled Fusus system is to not use it at all. 

We’ve seen that when law enforcement has access to cameras, they will use them, even if there are clear regulations prohibiting those uses: 

  • Black residents of a subsidized housing development became the primary surveillance targets for police officers with Fusus access in Toledo, Ohio. 

Firms such as Fusus and its parent company Axon are pushing AI-driven features and databases with interjurisdictional access. Surveillance technology is bending toward a future where all of our data are captured: our movements by street cameras (like those that would be added to Fusus), our driving patterns by ALPRs, our living habits by apps, and our actions online by web trackers, all of it then combined, sold, and shared.

When Nashville first started its relationship with Fusus in 2022, the company featured only a few products, primarily focused on standardizing video feeds from different camera providers. 

Now, Fusus is aggressively leaning into artificial intelligence, claiming that its “AI on the Edge” feature is built into the initial capture phase and begins processing as soon as video is taken. Even if the city bans the use of face recognition for the connected camera system, Fusus boasts that its system can detect humans and objects, combine other characteristics to identify individuals, detect movements, and trigger notifications based on certain characteristics and behaviors. Marketing material claims that the system comes “pre-loaded with dozens of search and analysis variables and profiles that are ready for action,” including a “robust & growing AI library.” It’s unclear how these AI recognition options are generated, how they are vetted, if at all, or whether they can even be removed, as the ordinance would require.

The proposed “guardrails” in Nashville are insufficient to address the danger posed by mass surveillance systems, and the city of Nashville shouldn’t believe it has protected its residents, tourists, and other visitors by passing them. Nashville residents and advocacy groups have already raised these concerns.

The only true way to protect Nashville’s residents against dragnet surveillance and overcriminalization is to block access to these invasive technologies altogether. Though this ordinance has passed its second reading, Nashville should not adopt Fusus or any other connected camera system, regardless of whether the ordinance is ultimately adopted. If Councilors care about protecting their constituents, they should hold the line against Fusus. 

Anchorage Police Department: AI-Generated Police Reports Don’t Save Time

The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.

The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review reports cut into the time savings from generating the report.  The software translates the audio into narrative, and officers are expected to read through the report carefully to edit it, add details, and verify it for authenticity. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued.

The Anchorage Police Department is not alone in finding that Draft One is not a time-saving device for officers. A new study of police use of AI to write reports, which specifically tested Axon’s Draft One, found that AI-assisted report writing offered no real time-savings advantage.

This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports.

In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how many times departments decide that technology is costly, flawed, or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.

EFF In Conversation With Ron Deibert on Chasing Shadows

Join EFF's Cindy Cohn and Eva Galperin in conversation with Ron Deibert of the University of Toronto’s Citizen Lab, to discuss Ron’s latest book: Chasing Shadows: Cyber Espionage, Subversion and the Global Fight for Democracy. Chasing Shadows provides a front-row seat to a dark underworld of digital espionage, dark PR, and subversion. The book provides a gripping account of how the Citizen Lab, the world’s foremost digital watchdog, has uncovered dozens of cyber espionage cases and protects people in countries around the world. Called “essential reading” by Margaret Atwood, it’s a chilling reminder of the invisible invasions happening on smartphones and computers around the world.

LEARN MORE


When:

Monday, March 10, 2025 
7:00 pm - 9:00 pm (PT)

Where:

In-person:
City Lights Bookstore
261 Columbus Avenue
San Francisco, CA 94133

Virtual:
Zoom

About the Author:

Ronald J. Deibert is the founder and director of the Citizen Lab, a world-renowned digital security research center at the University of Toronto. The bestselling author of Reset: Reclaiming the Internet for Civil Society and Black Code: Surveillance, Privacy, and the Dark Side of the Internet, he has also written many landmark articles and reports on espionage operations that infiltrated government and NGO computer networks. His team’s exposés of the spyware that attacks journalists and anti-corruption advocates around the world have been featured in The New York Times, The Washington Post, Financial Times, and other media. Deibert has received multiple honors for his cutting-edge work, and in 2022 he was appointed an Officer of the Order of Canada—the country’s second-highest honor of merit.

Anti-Surveillance Mapmaker Refuses Flock Safety's Cease and Desist Demand

Flock Safety loves to crow about the thousands of local law enforcement agencies around the United States that have adopted its avian-themed automated license plate readers (ALPRs). But when a privacy activist launched a website to map out the exact locations of these pole-mounted devices, the company tried to clip his wings.  

The company sent DeFlock.me and its creator Will Freeman a cease-and-desist letter, claiming that the project dilutes its trademark. Suffice it to say, and to lean into ornithological wordplay, the letter is birdcage liner.  

Representing Freeman, EFF sent Flock Safety a letter rejecting the demand, pointing out that the grassroots project is well within its First Amendment rights.  

Flock Safety’s car-tracking cameras have been spreading across the United States like an invasive species, preying on public safety fears and gobbling up massive amounts of sensitive driver data. The technology not only tracks vehicles by their license plates, but also creates “fingerprints” of each vehicle, including the make, model, color and other distinguishing features. This is a mass surveillance technology that collects information on everyone, regardless of whether they are connected to a crime. It has been misused by police to spy on their ex-partners and could be used to target people engaged in First Amendment activities or seeking medical care.  

Through crowdsourcing and open-source research, DeFlock.me aims to “shine a light on the widespread use of ALPR technology, raise awareness about the threats it poses to personal privacy and civil liberties, and empower the public to take action.”  While EFF’s Atlas of Surveillance project has identified more than 1,700 agencies using ALPRs, DeFlock has mapped out more than 16,000 individual camera locations, more than a third of which are Flock Safety devices.  

Flock Safety is so integrated into law enforcement, it’s not uncommon to see law enforcement agencies actually promoting the company by name on their websites. The Sussex County Sheriff’s website in Virginia has only two items in its menu bar: Accident Reports and Flock Safety. The name “DeFlock,” EFF told the vendor, represents the project’s goal of “ending ALPR usage and Flock’s status as one of the most widely used ALPR providers.” It’s accurate, appropriate, effective, and most importantly, legally protected.  

 We wrote:  

Your claims of dilution by blurring and/or tarnishment fail at the threshold, without even needing to address why dilution is unlikely. Federal anti-dilution law includes express carve-outs for any noncommercial use of a mark and for any use in connection with criticizing or commenting on the mark owner or its products. Mr. Freeman’s use of the name “DeFlock” is both.

Flock Safety’s cease-and-desist letter is just the latest example of a company turning to bogus intellectual property claims to silence its critics. Frequently, these claims have no legal basis and are designed to frighten under-resourced activists and advocacy groups with high-powered law firm letterheads. EFF is here to stand up against these trademark bullies, and in the case of Flock Safety, flip them the bird.

Police Use of Face Recognition Continues to Rack Up Real-World Harms

Police have shown, time and time again, that they cannot be trusted with face recognition technology (FRT). It is too dangerous, invasive, and in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether it is fully accurate or not, is too dangerous for police use,  and such use ought to be banned.

Now, The Washington Post has proved one more reason for this ban: police claim to use FRT just as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the most likely match spit out by the computer without first doing their own investigation.


The report also tells the stories of two men who were unknown to the public until now: Christopher Galtin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami, respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence showing that, whatever apparent match the computer had found, they were not the people police were looking for.

This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that his department does not arrest people based solely on face recognition and without proper follow-up investigations. “Matches are treated like an anonymous tip,” he said during the hearing.

Apparently not all officers got the memo.

We’ve seen this before. Many times. Galtin and Vernau join a growing list of people known to have been wrongfully arrested around the United States based on police use of face recognition. They include Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that all six of these people are Black, and Christopher Galtin now joins that list. Scholars and activists have been raising the alarm for years that, in addition to a huge amount of police surveillance generally being directed at Black communities, face recognition specifically has a long history of lower accuracy when identifying people with darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, instituting strict new guidelines on the use of face recognition technology.

Cities across the United States have decided to join the growing movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.

Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in ways that violate their own departmental policies.


We continue to urge cities, states, and Congress to ban police use of face recognition technology. We stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, increased evidence of these harms will only increase the urgency of our movement. 

AI and Policing: 2024 in Review

There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing can take many forms that we can trace back decades–including various forms of face recognition, predictive policing, data analytics, automated gunshot recognition, etc. But this year has seen the rise of a new and troublesome development in the integration between policing and artificial intelligence: AI-generated police reports.

Egged on by companies like Truleo and Axon, there is a rapidly-growing market for vendors that use a large language model to write police reports for officers. In the case of Axon, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input except for a prompt to add a few details here and there.

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading it carefully?

And there are so many more questions we have. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict or important rhetorical tools of policing like the phrases, “stop resisting” and “drop the weapon,” even if a person is unarmed or is not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if not explicitly made to handle these situations, if left to their own devices, officers will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and providing more information as we continue to learn more about how it’s being used. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Aerial and Drone Surveillance: 2024 in Review

We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your backyard, you are exposed to eyes from above.

Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were developed by the military before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.

This year, we focused on fighting back against aerial surveillance facilitated by advancements in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we have argued that cases decided around the same time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing that.

Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.  

Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as “drone as first responder” programs, which ostensibly allow police to assess a situation, whether it’s dangerous or requires a police response at all, before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.

Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those can be hollow assurances. The NYPD promised that it would not surveil constitutionally protected backyards with drones, but Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.

Alarmingly, there are increasing calls by police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a drone armed with a Taser for deployment in school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd control weapons.

As drones carry more technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional right to privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Police Surveillance in San Francisco: 2024 in Review

From a historic ban on police using face recognition, to landmark CCOPS legislation, to the first ban in the United States of police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.

Unfortunately, 2024 was a far cry from those victories.

While EFF continues to fight for common sense police reforms in our own backyard, this year saw a change in city politics toward something darker and more unaccountable than we’ve seen in a while.

In the spring of this year, we opposed Proposition E, a ballot measure which allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went as far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.

Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In the haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents have to wave our arms around and implore the police to follow the law. But, we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.  

EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue–even if that means calling attention to the SFPD’s casual law breaking or helping to defend the privacy laws that made this city a shining example of 21st century governance. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review

EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.

Removing Amazon Ring

We started off with a big change: the removal of our data on Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app, a move EFF and other organizations had urged for years. Though police can still get access to Ring camera footage by getting a warrant or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.

Adding third-party platforms

In 2024, we added an important and growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear.

New Fusus data

404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.

New and updated ALPR data 

EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data. 

In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer. 

Redoing drone data

This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource. 

We also added new drone data from Illinois, Minnesota, and Texas.

We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed. 

Other cool uses of the Atlas

The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data. 

The Atlas of Surveillance is regularly featured in news outlets throughout the country, including in the MIT Technology Review reporting on drones, and news from the Auburn Reporter about ALPR use in Washington. It also became the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical Intervention.”

Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at Northwestern University, who used the data to make a cool visualization of the surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology, and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”


Thanks to our volunteers

The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.

The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts of making misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums, does far less than the company has been claiming.

The FTC alleged in their complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the software. 

The settlement requires notice, to the many schools which use this technology to keep weapons out of classrooms, that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company also is prohibited from making statements regarding: 

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what do they even have left to sell? 

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public data in order to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often operates. 

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as in policing and crime prediction, and falsely identifying people as suspects based on face recognition.

Now, politicians, schools, police departments, and private venues have been duped again. This time by Evolv, a company that purports to sell “weapon detection technology” that it claims uses AI to scan people entering a stadium, school, or museum and, in theory, alert authorities if it recognizes the shape of a weapon on a person.

Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City rolled out a trial of Evolv technology at 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans, there were 118 false positives. Twelve knives and no guns were recovered.
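
To put those figures in perspective, here is a quick back-of-the-envelope calculation in Python using only the numbers cited above. The second figure rests on an assumption the reporting does not confirm, namely that each recovered knife corresponded to one true alert:

# Back-of-the-envelope arithmetic on the NYC subway pilot figures cited above.
scans = 2749
false_positives = 118
knives_recovered = 12  # no guns were recovered

false_alarm_rate = false_positives / scans
print(f"False alarms per scan: {false_alarm_rate:.1%}")  # roughly 4.3%, about 1 in 23 scans

# Assumption (not stated in the reporting): every recovered knife produced one true alert.
total_alerts = false_positives + knives_recovered
print(f"Share of alerts that were false: {false_positives / total_alerts:.1%}")  # roughly 90.8%

Under those assumptions, roughly nine out of ten alerts sent officers toward someone carrying nothing dangerous at all.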

Make no mistake, false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence–but throwing money at the problem by buying unproven technology is not the answer and actually takes away resources and funding from more proven and systematic approaches. We applaud the FTC for standing up to the lucrative security theater technology industry. 

Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself

404 Media, along with Haaretz, Notus, and Krebs On Security recently reported on a company that captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers that this tool presents are especially grave for those traveling to or from out-of-state reproductive health clinics, places of worship, and the border.

The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law enforcement in the future.

With an incoming administration adversarial to those most at risk from location tracking using tools like Locate X, the time is ripe to bolster our digital defenses. Now more than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exerting their bodily autonomy. Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.

First, a short disclaimer: these tips provide some level of protection against mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies that can help restore your location privacy. Your security plan should reflect how specifically targeted you are for surveillance. Additional steps, such as researching and mitigating the on-board devices included with your car, or sweeping for physical GPS trackers, may be prudent but are outside the scope of this post. Likewise, more advanced techniques, such as flashing your device with a custom-built privacy- or security-focused operating system, may provide additional protections that are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.

Disable Mobile Advertising Identifiers

Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms to micro-target you with ads based on your online behavior. One linchpin of that ecosystem is the unique identifier, such as the mobile advertising identifier (MAID), that links the information (in this case, location) an app or website delivers at one point in time to the information a different app or website delivers at the next. Essentially, MAIDs allow advertising platforms and the data brokers they sell to to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a cohesive picture of the movement of a device through space and time.
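
To make that “connect the dots” step concrete, here is a minimal, illustrative sketch in Python with made-up data. It is not Babel Street’s or any broker’s actual pipeline; it only shows why a stable identifier shared across apps is enough to reassemble scattered pings into one device’s track:

from collections import defaultdict

# Hypothetical pings, as a broker might receive them: different apps,
# different times, no obvious connection except the advertising ID.
pings = [
    {"maid": "ab12-0001", "app": "weather",    "ts": 1700000000, "lat": 40.7128, "lon": -74.0060},
    {"maid": "cd34-0002", "app": "game",       "ts": 1700000300, "lat": 34.0522, "lon": -118.2437},
    {"maid": "ab12-0001", "app": "flashlight", "ts": 1700003600, "lat": 40.7484, "lon": -73.9857},
    {"maid": "ab12-0001", "app": "game",       "ts": 1700007200, "lat": 40.7580, "lon": -73.9855},
]

# Grouping by MAID turns a scatterplot of unrelated points into an ordered
# trajectory for each device.
tracks = defaultdict(list)
for ping in pings:
    tracks[ping["maid"]].append(ping)

for maid, track in tracks.items():
    track.sort(key=lambda p: p["ts"])
    path = " -> ".join(f"({p['lat']}, {p['lon']})" for p in track)
    print(maid, path)

# Deleting or resetting the MAID breaks this join: the same pings no longer
# share a key, so they cannot be trivially stitched into one movement history.

That is exactly why the opt-outs below, which delete or restrict these identifiers, are worth the minute or two they take.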

As a result of significant pushback by privacy advocates, both Android and iOS provided ways to disable advertising identifiers from being delivered to third-parties. As we described in a recent post, you can do this on Android following these steps:

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can use the older version of Android’s privacy controls to reset it and ask apps not to track you.

And on iOS:

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the “Allow apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Audit Your Apps’ Trackers and Permissions

In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged with trackers delivering your sensitive details to third-parties. That’s why it’s advisable to limit the amount of apps you download and instead use your pre-existing apps or operating system to, say, find the bathroom light switch at night. It isn't just good for your privacy: any new app you download also increases your “attack surface,” or the possible paths hackers might have to compromise your device.

We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts, offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get to these pages:

On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager

On iPhone: Open Settings > Privacy & Security.

If you're inclined to do so, there are tricks for further research. For example, you can look up trackers in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will allow you to see exactly what permissions an app uses, how often it uses them, and what domains it communicates with. You can investigate any given domain by just pasting it into a search engine and seeing what’s been reported on it. Pro tip: to exclude results from that domain itself and only include what other domains say about it, many search engines like Google allow you to use the syntax -site:www.example.com.

Disable Real-Time Tracking with Airplane Mode

To prevent an app from having network connectivity and sending out your location in real-time, you can put your phone into airplane mode. Although it won’t prevent an app from storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.

Some apps are designed to allow you to navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with Offline maps. Tapping this will allow you to draw a boundary box and pre-download an entire region, which you can do even without connectivity. As of iOS 18, you can do this on Apple Maps too: tap your profile picture, then “Offline Maps,” and “Download New Map.”

Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored locally.

Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.

Separate Devices for Separate Purposes

Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone. Not everyone can afford to purchase a separate device just for the times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.

A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place you know Wi-Fi is available. It also means any persistent identifiers (such as the MAID described above) are different for this device, along with different device characteristics which won’t be tied to your normal personal smartphone. Going through this phone and keeping its apps, permissions, and browsing to an absolute minimum will avoid an instance where that random sketchy game you have on your normal device to kill time sends your location to its servers every 10 seconds.

One good (though more onerous) practice that would remove any persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory settings after each visit to a sensitive location. Just remember to re-download your offline maps and increase your privacy settings afterwards.

Further Reading

Our own Surveillance Self-Defense site, along with many other resources, is available to provide more guidance on protecting your digital privacy. Often, general privacy tips are also applicable to protecting your location data from being divulged.

The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target audiences. Earlier this year, the FTC showed some appetite to pursue enforcement action against companies brokering the mobile location data of users. We applauded this enforcement, and hope it will continue into the next administration. But regulatory authorities only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect citizens from the vast array of data brokers and advertising services profiting off of surveilling us all.

Only a federal privacy law with a strong private right of action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!

AI in Criminal Justice Is the Trend Attorneys Need to Know About

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.

Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, enabling broad, privacy-deteriorating surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools, combined with public records, geolocation tracking, and other data from mobile phones, are being shared between policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is being used to make black-box inferences and connections among these data streams. A growing number of police departments have been eager to add AI to their arsenals, largely encouraged by extensive marketing from the companies developing and selling this equipment and software.

“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative Researcher Beryl Lipton wrote.

The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF format.

The Human Toll of ALPR Errors

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gun point. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
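
As a thought experiment, here is a minimal, hypothetical sketch in Python (ours, not any ALPR vendor’s actual code) of a hot-list check that matches only on the plate text the camera reads. Nothing in it verifies the plate’s state or the vehicle’s description, which is exactly the gap that turned a record for a Montana motorcycle into a gunpoint stop of a Colorado SUV:

# Hypothetical hot list: plate text -> the vehicle police are actually seeking.
hot_list = {
    "ABC1234": {"state": "MT", "vehicle": "motorcycle", "reason": "reported stolen"},
}

def check_plate(plate_text):
    """Return the hot-list record if the OCR'd plate text matches, else None."""
    return hot_list.get(plate_text)

# A Colorado SUV happens to carry the same plate characters.
observed = {"plate": "ABC1234", "state": "CO", "vehicle": "SUV"}

hit = check_plate(observed["plate"])
if hit:
    mismatch = hit["state"] != observed["state"] or hit["vehicle"] != observed["vehicle"]
    # The text matches, but the state and vehicle description do not. If officers
    # treat the alert as confirmation rather than a tip to verify, an innocent
    # driver ends up in a high-risk stop.
    print("ALERT:", observed, "flagged against", hit, "| mismatch ignored:", mismatch)

The same false hit occurs when the camera’s OCR misreads a single character, as in the case described next.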

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her to exit her vehicle, at gun point, and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black.  ALPR technology can exacerbate our already discriminatory policing system, among other reasons because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gun point to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ as an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gun point and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state on 1-in-10 plates (not counting other reading errors).

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used in the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the safety, privacy, and other threats posed by ALPR technology through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference

Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of Chiefs of Police (IACP).

IACP, held Oct. 19-22 in Boston, brings together thousands of police employees with the businesses that want to sell them guns, gadgets, and gear. Across the four-day schedule were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the trade show floor.

Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest products. Event sponsors included big names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+ attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles, cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.

And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve seen in our research and at IACP.

A partial view of the vendor booths at IACP 2024


Doughnuts provided by police tech vendor Peregrine

“All in On AI” Demands Accountability

Police are pushing forward full speed ahead on AI. 

EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive policing, and gunshot detection. We’ve seen a trend toward the integration of these various data streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started tracking the rise of what we call Third Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative use. 

The IACP conference featured companies selling all of these kinds of surveillance. Also, each day contained multiple panels on how AI could be integrated into local police work, including featured speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose agency was among the first to use Axon’s DraftOne, software using genAI to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition software. Axon offered a whole suite of tools, centering its presentation around AxonAI and a computer-driven future.

The booth for police drone provider Brinc

The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks, including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear guardrails. 

The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle

AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image, what we click on. 

As one vendor employee said to us, “Yeah, it’s scary.” 

Corporate harvesting and monetizing of our data is a wildly unregulated market. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis, Peregrine, DataMinr, and others showed off how their platforms can give police access to ever more data from tens of thousands of sources.

Some Cops Care What the Public Thinks

Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the public generally believes. In another talk, Chula Vista showed off professionally-produced videos designed to win public favor. 

This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop law enforcement’s full-scale invasion of our private lives.

Cop Tech is Coming To Every Department

The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,” police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.

A Cybertruck at the booth for Skyfire AI

“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost every company wants that department as a customer, so it can start vacuuming the department’s data into the company system and then share that data with everyone else.

We Need Federal Data Privacy Legislation

There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these new tools, and they’re typically able to navigate around piecemeal state legislation. 

We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that limits these data companies from helping police surveil us lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore and create and grow without fear of indefinite retention of every character we type, every icon we click. 

Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a dollar off of it.

The Real Monsters of Street Level Surveillance

Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts. To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.

If your blood runs too cold, take a break with our favorite digital rights legends—the Encryptids.

The Face Stealer

 "The Face Stealer" text over illustration of a spider-like monster

Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.

The Real Monster

Face recognition technology (FRT) might not jump out at you, but the impacts of this monster are all too real. EFF wants to banish this monster with a full ban on government use, and prohibit companies from feeding on this data without permission. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities.

Three-eyed Beast

"The Three-eyed Beast" text over illustration of a rectangular face with a large camera as a snout, pinned to a shirt with a badge.

Freeze! In your weakest moment, you may encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of delight turn you into a spectacle—sometimes calling in foes like the Face Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you.

The Real Monster

Body-worn cameras are marketed as a fix for police transparency, but instead our communities get another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposes it. Even worse, these cameras can house other surveillance threats like Face Recognition Technology. Without strict safeguards, and community control of whether to adopt them in the first place, these cameras do more harm than good.

Shrapnel Wraith

"The Shrapnel Wraith" text over illustration of a mechanical vulture dropping gears and bolts

If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through your skin. Danger is circling just overhead.

The Real Monster

Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making them a potentially lethal threat. Drone and robot use must come with strong protections for people’s privacy, and we strongly oppose arming them with any weapons.

Doorstep Creep

"The Doorstep Creep" text over illustration of a cloaked figure in front of a door, holding a staff topped with a camera

Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in communities, transforming cozy entries into fortresses of suspicion. Your visit feels judged and unwanted, cast in a shadow of loathing. As you walk away, slanderous whispers echo in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep’s dark eyes remind you of how alone you are.

The Real Monster

Community Surveillance Apps come in many forms, encouraging the adoption of more home security devices like doorway cameras, smart doorbells, and crowd-sourced surveillance apps. People come to these apps out of fear and only find more of the same, with greater public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community support and mutual aid.

Foggy Gremlin

"The Foggy Fremlin" text over illustration of a little monster with sharp teeth and a long tail, rising a GPS location pin.

Be careful where you step with this scavenger around. The Foggy Gremlin sticks to you like a leech and envelops you in a psychedelic mist to draw in large predators. You can run, but no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been, is now a hunting ground. As exhaustion sets in, a world once open and bright becomes narrow, dark, and sinister.

The Real Monster

Real-time location tracking is a chilling mechanism that enables law enforcement to monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the Fourth Amendment is Not For Sale Act to protect individuals from such tracking.


California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise such caution as companies continue to peddle technology – generative artificial intelligence (genAI) to help write police reports – that could harm people who come into contact with the criminal justice system.

Chief Deputy Prosecutor Daniel J. Clark said in a memo about AI-based tools to write narrative police reports based on body camera audio that the technology as it exists is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark that: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here.

Civil Rights Commission Pans Face Recognition Technology

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.
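As a rough illustration of that matching step, and not any agency’s or vendor’s actual pipeline, a face recognition search typically reduces each photo to a numeric embedding and ranks the comparison data set by similarity to the probe photo; the names, vectors, and threshold below are invented for this sketch:

```python
import numpy as np

# Hypothetical embeddings: in a real system these come from a face-encoding
# model; here they are invented 4-dimensional vectors for illustration only.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.3, 0.2]),
    "person_b": np.array([0.2, 0.8, 0.5, 0.1]),
    "person_c": np.array([0.4, 0.4, 0.9, 0.3]),
}
probe = np.array([0.85, 0.15, 0.35, 0.25])  # embedding of the probe photo


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Rank the comparison data set by similarity to the probe photo.
scores = {name: cosine_similarity(probe, vec) for name, vec in gallery.items()}
best_name, best_score = max(scores.items(), key=lambda kv: kv[1])

# A similarity threshold decides whether the top candidate counts as a "match".
THRESHOLD = 0.95
print(best_name, round(best_score, 3),
      "match" if best_score >= THRESHOLD else "no match")
```

Where that threshold is set drives the trade-off between false positives (different people declared a match) and false negatives (the same person missed), which the Commission’s report examines in detail below.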


The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we’ve stated about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.” The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, in which the stakes of an FRT misidentification are people’s lives and liberty.

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

Unveiling Repression in Venezuela: A Legacy of Surveillance and State Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also  targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, highlighting it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines degraded infrastructure (keeping essential services barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, the civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have carried out a severe backlash against demonstrations, killing 20 people. The results announced by the government, which claimed a re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with numerous instances of technology-supported repression and violence. The surveillance and control apparatus saw intensified use, such as increased deployment of VenApp, a surveillance application originally launched in December 2022 to report failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is the pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media, some individuals have refused interviews or published documented human rights violations under generic names, and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Additionally, many are now sending information about what is happening in the country to their networks abroad for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist, and in August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.

