
Police Use of Face Recognition Continues to Rack Up Real-World Harms

January 15, 2025 at 11:22

Police have shown, time and time again, that they cannot be trusted with face recognition technology (FRT). It is too dangerous and invasive, and in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether or not it is fully accurate, is too dangerous for police use, and that such use ought to be banned.

Now, The Washington Post has documented one more reason for this ban: police claim to use FRT only as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the most likely match spit out by the computer without first doing their own investigation.

The report also tells the stories of two men who were unknown to the public until now: Christopher Gatlin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami, respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence that would have shown that, despite the apparent match found by the computer, they were not the people being sought.

This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that his department does not arrest people based solely on face recognition without proper follow-up investigation. “Matches are treated like an anonymous tip,” he said during the hearing.

Apparently not all officers got the memo.

We’ve seen this before. Many times. Gatlin and Vernau join a growing list of people known to have been wrongfully arrested around the United States based on police use of face recognition: Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that these six people, and now Christopher Gatlin, are all Black. Scholars and activists have been raising the alarm for years that, in addition to the huge amount of police surveillance generally directed at Black communities, face recognition specifically has a long history of lower accuracy when identifying people with darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, instituting strict new guidelines for the use of face recognition technology.

Cities across the United States have decided to join the growing movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.

Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in ways that violate their own departmental policies.


We continue to urge cities, states, and Congress to ban police use of face recognition technology. We stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, mounting evidence of these harms will only increase the urgency of our movement.

AI and Policing: 2024 in Review

December 31, 2024 at 10:02

There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. Meanwhile, some people are getting hideously rich by convincing those with money and influence that they must integrate AI into their businesses or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing can take many forms that we can trace back decades, including face recognition, predictive policing, data analytics, automated gunshot recognition, and more. But this year has seen the rise of a new and troublesome development in the integration of policing and artificial intelligence: AI-generated police reports.

Led by companies like Truleo and Axon, a market is growing rapidly for tools that use large language models to write police reports for officers. In the case of Axon, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input, except for a prompt to add a few details here and there.

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?

And we have so many more questions. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict, or important rhetorical tools of policing like the phrases “stop resisting” and “drop the weapon,” even if a person is unarmed or not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if the tool is not explicitly made to handle these situations, officers left to their own devices will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork, and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and will provide more information as we learn more about how it’s being used.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Aerial and Drone Surveillance: 2024 in Review

By Hannah Zhao
December 29, 2024 at 05:50

We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your own backyard, you are exposed to eyes from above.

Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were developed by the military before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors alike. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how far technology has advanced.

This year, we focused on fighting back against aerial surveillance facilitated by advances in these technologies. Unfortunately, many legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we have argued that cases decided around the time people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing as much.

Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.  

Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as “drone as first responder” programs, which ostensibly allow police to assess a situation (whether it is dangerous, or requires a police response at all) before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence calls, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.

Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those can be hollow assurances. The NYPD promised that it would not use drones to surveil constitutionally protected backyards, but Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.

Alarmingly, there are increasing calls from police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a taser-armed drone intended for school shooting scenarios. We are likely to see more proposals like this, including drones armed with pepper spray and other crowd-control weapons.

As drones carry more technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional rights to privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Police Surveillance in San Francisco: 2024 in Review

December 25, 2024 at 10:33

From a historic ban on police using face recognition, to landmark CCOPS legislation, to the first ban in the United States of police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.

Unfortunately, 2024 was a far cry from those victories.

While EFF continues to fight for common-sense police reforms in our own backyard, this year saw city politics shift toward something darker and less accountable than we’ve seen in a while.

In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and we even went as far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.

Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just the local one. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In its haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents are left to wave their arms around and implore the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.

EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue, even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st-century governance.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review

December 24, 2024 at 14:05

EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.
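For readers who want to explore the dataset themselves, the Atlas's records can be tallied with a few lines of code. Below is a minimal Python sketch of counting deployments by technology and by state; the file name and column names ("atlas.csv", "Technology", "State") are hypothetical stand-ins for illustration, not the Atlas's actual schema, so adjust them to match whatever export you download.

```python
# Minimal sketch: tallying records from an Atlas-style CSV export.
# "atlas.csv" and the "Technology"/"State" column names are hypothetical
# stand-ins -- check the real export for its actual schema.
import csv
from collections import Counter

tech_counts = Counter()
state_counts = Counter()

with open("atlas.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        tech_counts[row["Technology"]] += 1
        state_counts[row["State"]] += 1

print("Deployments by technology:")
for tech, n in tech_counts.most_common():
    print(f"  {tech}: {n}")

print("Top five states by recorded deployments:")
for state, n in state_counts.most_common(5):
    print(f"  {state}: {n}")
```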

Removing Amazon Ring

We started off with a big change: the removal of our set of Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app, a move EFF and other organizations had been demanding for years. Though police can still get access to Ring camera footage by getting a warrant, or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.

Adding third-party platforms

In 2024, we added an important and growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear.

New Fusus data

404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.

New and updated ALPR data 

EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data. 

In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer. 

Redoing drone data

This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource. 

We also added new drone data from Illinois, Minnesota, and Texas.

We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed. 

Other cool uses of the Atlas

The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data. 

The Atlas of Surveillance is regularly featured in news outlets throughout the country, including MIT Technology Review’s reporting on drones and the Auburn Reporter’s coverage of ALPR use in Washington. It has also been the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical Intervention.”

Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at Northwestern University, who used the data to make a cool visualization of the surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology, and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”


Thanks to our volunteers

The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.

The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts of misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums, does far less than the company has been claiming.

The FTC alleged in its complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the technology.

The settlement requires Evolv to notify the many schools that use this technology to keep weapons out of classrooms that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company is also prohibited from making statements regarding:

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what do they even have left to sell? 

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public data in order to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just Walk Out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on inside the black box of corporate secrecy in which artificial intelligence so often operates.

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as in policing and crime prediction, and falsely identifying people as suspects based on face recognition.

Now politicians, schools, police departments, and private venues have been duped again. This time by Evolv, a company that purports to sell “weapon detection technology” that uses AI to scan people entering a stadium, school, or museum and theoretically alerts authorities if it recognizes the shape of a weapon on a person.

Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City ran a trial of Evolv technology at 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans, there were 118 false positives, a false alarm on roughly 4 percent of scans. Twelve knives and no guns were recovered.

Make no mistake, false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence, but throwing money at the problem by buying unproven technology is not the answer. It actually takes away resources and funding from more proven and systematic approaches. We applaud the FTC for standing up to the lucrative security theater technology industry.

Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself

November 8, 2024 at 20:13

404 Media, along with Haaretz, Notus, and Krebs on Security, recently reported on a company that captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers that this tool presents are especially grave for those traveling to or from out-of-state reproductive health clinics, places of worship, and the border.

The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law enforcement in the future.

With an incoming administration adversarial to those most at risk from location tracking using tools like Locate X, the time is ripe to bolster our digital defenses. Now more than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exercising their bodily autonomy. Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.

First, a short disclaimer: these tips provide some level of protection against mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies that can help restore your location privacy. Your security plan should reflect how specifically targeted you are for surveillance. Additional steps, such as researching and mitigating the on-board devices included with your car, or sweeping for physical GPS trackers, may be prudent steps that are outside the scope of this post. Likewise, more advanced techniques, such as flashing your device with a custom-built privacy- or security-focused operating system, may provide additional protections that are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.

Disable Mobile Advertising Identifiers

Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms that micro-target you with ads based on your online behavior. One linchpin of this system is the unique identifier, such as the mobile advertising identifier (MAID), that connects information (in this case, location) delivered to one app or website at one point in time with information delivered to a different app or website at another. Essentially, MAIDs allow advertising platforms and the data brokers they sell to to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a cohesive picture of the movement of a device through space and time.
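To make the “connect the dots” problem concrete, here is a small, hypothetical Python sketch of what a location data broker’s pipeline effectively does: pings arriving from entirely unrelated apps collapse into a single per-device trajectory the moment they share a MAID. The record format and values are invented for illustration.

```python
# Minimal sketch: how a shared MAID lets a broker stitch location pings
# from unrelated apps into one device's movement history.
# The ping format and values are hypothetical.
from collections import defaultdict

pings = [
    # (maid, unix_time, latitude, longitude, source_app)
    ("ab12cd34-...-f9", 1700000000, 37.7793, -122.4193, "weather_app"),
    ("ab12cd34-...-f9", 1700003600, 37.7599, -122.4148, "game_app"),
    ("ab12cd34-...-f9", 1700007200, 37.7489, -122.4184, "flashlight_app"),
]

trajectories = defaultdict(list)
for maid, ts, lat, lon, app in pings:
    trajectories[maid].append((ts, lat, lon, app))  # keyed only by the MAID

for maid, points in trajectories.items():
    points.sort()  # chronological order: one device's path through the day
    apps = {p[3] for p in points}
    print(f"device {maid}: {len(points)} location points stitched "
          f"from {len(apps)} unrelated apps")
```

Deleting or resetting the identifier breaks exactly this join, which is why the opt-outs below matter.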

As a result of significant pushback by privacy advocates, both Android and iOS now provide ways to prevent your advertising identifier from being delivered to third parties. As we described in a recent post, you can do this on Android by following these steps:

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt-out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can use the older version of Android’s privacy controls to reset it and ask apps not to track you.

And on iOS:

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the “Allow Apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track you in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Audit Your Apps’ Trackers and Permissions

In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged with trackers delivering your sensitive details to third parties. That’s why it’s advisable to limit the number of apps you download and instead use your pre-existing apps or operating system to, say, find the bathroom light switch at night. It isn’t just good for your privacy: any new app you download also increases your “attack surface,” or the possible paths hackers might have to compromise your device.

We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts, offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get to these pages:

On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager

On iPhone: Open Settings > Privacy & Security.

If you're inclined to do so, there are tricks for further research. For example, you can look up the trackers in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will show you exactly what permissions an app uses, how often it uses them, and what domains it communicates with. You can investigate any given domain by pasting it into a search engine and seeing what’s been reported about it. Pro tip: to exclude results from that domain itself and only include what other domains say about it, many search engines like Google let you add the syntax -site:www.example.com to your query.

Disable Real-Time Tracking with Airplane Mode

To prevent an app from having network connectivity and sending out your location in real-time, you can put your phone into airplane mode. Although it won’t prevent an app from storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.

Some apps are designed to allow you to navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with Offline maps. Tapping this will allow you to draw a boundary box and pre-download an entire region, which you can do even without connectivity. As of iOS 18, you can do this on Apple Maps too: tap your profile picture, then “Offline Maps,” and “Download New Map.”

Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored locally.

Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.

Separate Devices for Separate Purposes

Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone. Not everyone can afford to purchase a separate device just for those times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.

A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place where you know Wi-Fi is available. It also means any persistent identifiers (such as the MAID described above) are different for this device, along with device characteristics that won’t be tied to your normal personal smartphone. Keeping this phone’s apps, permissions, and browsing to an absolute minimum will avoid a situation where that random sketchy game you have on your normal device to kill time sends your location to its servers every 10 seconds.

One good (though more onerous) practice that removes any persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory settings after each visit to a sensitive location. Just remember to re-download your offline maps and reapply your privacy settings afterwards.

Further Reading

Our own Surveillance Self-Defense site, along with many other resources, is available to provide more guidance on protecting your digital privacy. Often, general privacy tips are also applicable to protecting your location data from being divulged.

The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target audiences. Earlier this year, the FTC showed some appetite for pursuing enforcement action against companies brokering users’ mobile location data. We applauded this enforcement, and hope it will continue into the next administration. But regulatory authorities only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect people from the vast array of data brokers and advertising services profiting off of surveilling us all.

Only a federal privacy law with a strong private right of action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!

AI in Criminal Justice Is the Trend Attorneys Need to Know About

By Beryl Lipton
November 5, 2024 at 17:00

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.

Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, enabling broad, privacy-deteriorating surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools, combined with public records, geolocation tracking, and other data from mobile phones, are being shared between policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is being used to make black-box inferences and connections between these data points. A growing number of police departments have been eager to add AI to their arsenals, largely encouraged by extensive marketing from the companies developing and selling this equipment and software.

“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative Researcher Beryl Lipton wrote.

The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF format.

The Human Toll of ALPR Errors

November 1, 2024 at 23:17

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family when, all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
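As a rough illustration of where this went wrong, the sketch below contrasts a hot-list check that compares only the plate string, which is effectively how the tip was treated in Gilliam’s case, with the verification step officers are supposed to perform before acting. All records here are invented.

```python
# Minimal sketch: plate-string-only matching vs. verifying state and
# vehicle type before treating an ALPR hit as actionable. Hypothetical data.
HOT_LIST = [
    {"plate": "ABC1234", "state": "MT", "vehicle": "motorcycle"},  # stolen bike
]

def plate_only_match(read):
    # A naive "hit": any vehicle whose plate string collides.
    return [h for h in HOT_LIST if h["plate"] == read["plate"]]

def verified_match(read):
    # The follow-up check: the state and vehicle type must agree too.
    return [h for h in HOT_LIST
            if h["plate"] == read["plate"]
            and h["state"] == read["state"]
            and h["vehicle"] == read["vehicle"]]

camera_read = {"plate": "ABC1234", "state": "CO", "vehicle": "SUV"}
print(plate_only_match(camera_read))  # "hit" -- wrong state, wrong vehicle
print(verified_match(camera_read))    # [] -- no verified match
```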

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March 2009. She had just dropped her sister off at a BART train station when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her at gunpoint to exit her vehicle and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, among other reasons because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ as an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state on 1 in 10 plates (not counting other reading errors).
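To see why single-character misreads are so dangerous, consider the hypothetical sketch below: it enumerates every plate the camera could report after one confusion error, using character pairs like those in the stories above, and checks each against a hot list. One innocent plate turns into a wanted one with a single flipped character. Plates and confusion pairs are invented for illustration.

```python
# Minimal sketch: a one-character OCR confusion turns an innocent plate
# into a hot-list "hit." Plates and confusion pairs are hypothetical.
CONFUSIONS = {"3": "7", "7": "3", "H": "M", "M": "H", "2": "7", "0": "O", "O": "0"}
HOT_LIST = {"7XWG405", "HKD2210"}  # plates of wanted vehicles (invented)

def one_misread_variants(plate):
    """Yield every plate string one confusion error away from the true plate."""
    for i, ch in enumerate(plate):
        if ch in CONFUSIONS:
            yield plate[:i] + CONFUSIONS[ch] + plate[i + 1:]

innocent_plate = "3XWG405"  # what is actually mounted on the car
for variant in one_misread_variants(innocent_plate):
    if variant in HOT_LIST:
        print(f"{innocent_plate} misread as {variant}: flagged as stolen")
```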

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house, handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used in the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the safety, privacy, and other threats of ALPR technology through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference

By Beryl Lipton
October 28, 2024 at 10:52

Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of Chiefs of Police (IACP).

IACP, held Oct. 19-22 in Boston, brings together thousands of police employees with the businesses that want to sell them guns, gadgets, and gear. Across the four-day schedule were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the trade show floor.

Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest products. Event sponsors included big names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+ attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles, cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.

And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve seen in our research and at IACP.

A partial view of the vendor booths at IACP 2024


Doughnuts provided by police tech vendor Peregrine

“All in On AI” Demands Accountability

Police are pushing full speed ahead on AI.

EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive policing, and gunshot detection. We’ve seen a trend toward the integration of these various data streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started tracking the rise of what we call Third Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative use. 

The IACP conference featured companies selling all of these kinds of surveillance. Each day also contained multiple panels on how AI could be integrated into local police work, including featured speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose agency was among the first to use Axon’s Draft One, software that uses genAI to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition software. Axon offered a whole set of different tools, centering its presentation around AxonAI and the computer-driven future.

The booth for police drone provider Brinc

The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks, including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear guardrails. 

The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle

AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image, what we click on. 

As one vendor employee said to us, “Yeah, it’s scary.” 

Corporate harvesting and monetizing of our data is wildly unregulated. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis, Peregrine, and DataMinr showed off how their platforms can give police access to ever more data from tens of thousands of sources.

Some Cops Care What the Public Thinks

Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the public generally believes. In another talk, Chula Vista showed off professionally-produced videos designed to win public favor. 

This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop law enforcement’s full-scale invasion of our private lives.

Cop Tech is Coming To Every Department

The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,” police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.

A Cybertruck at the booth for Skyfire AI

“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost every company wants them as a customer, so they can start vacuuming their data into the company system and then share that data with everyone else. 

We Need Federal Data Privacy Legislation

There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these new tools, and they’re typically able to navigate around piecemeal state legislation. 

We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that limits these data companies from helping police surveil us lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore and create and grow without fear of indefinite retention of every character we type, every icon we click. 

Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a dollar off of it.

The Real Monsters of Street Level Surveillance

By Rory Mir
October 25, 2024 at 17:37

Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts. To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.

If your blood runs too cold, take a break with our favorite digital rights legends, the Encryptids.

The Face Stealer

 "The Face Stealer" text over illustration of a spider-like monster

Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.

The Real Monster

Face recognition technology (FRT) might not jump out at you, but the impacts of this monster are all too real. EFF wants to banish this monster with a full ban on government use, and prohibit companies from feeding on this data without permission. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities.

Three-eyed Beast

"The Three-eyed Beast" text over illustration of a rectangular face with a large camera as a snout, pinned to a shirt with a badge.

Freeze! In your weakest moment, you may encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of delight turn you into a spectacle—sometimes calling in foes like the Face Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you.

The Real Monster

Body-worn cameras are marketed as a fix for police transparency, but instead our communities get another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposes it. Even worse, these cameras can house other surveillance threats like Face Recognition Technology. Without strict safeguards, and community control of whether to adopt them in the first place, these cameras do more harm than good.

Shrapnel Wraith

"The Shrapnel Wraith" text over illustration of a mechanical vulture dropping gears and bolts

If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through your skin. Danger is circling just overhead.

The Real Monster

Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making them a potentially lethal threat. Drone and robot use must come with strong protections for people’s privacy, and we strongly oppose arming them with any weapons.

Doorstep Creep

"The Doorstep Creep" text over illustration of a cloaked figure in front of a door, holding a staff topped with a camera

Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in communities, transforming cozy entryways into fortresses of suspicion. Your visit feels judged, unwanted, cast in a shadow of loathing. As you walk away, slanderous whispers echo in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep’s dark eyes remind you of how alone you are.

The Real Monster

Community Surveillance Apps come in many forms, encouraging the adoption of more home security devices like doorway cameras, smart doorbells, and more crowd-sourced surveillance apps. People come to these apps out of fear and only find more of the same, with greater public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community support and mutual aid. 

Foggy Gremlin

"The Foggy Gremlin" text over illustration of a little monster with sharp teeth and a long tail, raising a GPS location pin.

Be careful where you step: this scavenger, the Foggy Gremlin, sticks to you like a leech and envelops you in a psychedelic mist to draw in large predators. You can run, but you can no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been, is now a hunting ground. As exhaustion sets in, a world once open and bright becomes narrow, dark, and sinister.

The Real Monster

Real-time location tracking is a chilling mechanism that enables law enforcement to monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the Fourth Amendment is Not For Sale Act to protect individuals from such tracking.

Street Level Surveillance

Fight the monsters in your community

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

17 October 2024 at 16:04

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

17 October 2024 at 10:27

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise similar caution as companies continue to peddle generative artificial intelligence (genAI) products for writing police reports, technology that could harm people who come into contact with the criminal justice system.

In a memo addressing AI-based tools that draft narrative police reports from body camera audio, Chief Deputy Prosecutor Daniel J. Clark said the technology as it exists is “one we are not ready to accept.”

The memo continues: “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here

Civil Rights Commission Pans Face Recognition Technology

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.
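To make that probe-photo process concrete, here is a deliberately simplified sketch of the matching step in Python. It is not any vendor’s actual pipeline: real systems use proprietary neural-network models to produce the embeddings, and the names, vectors, and threshold below are invented for illustration.

```python
# A simplified, hypothetical sketch of probe-photo matching. Real FRT
# systems derive each embedding from a face image using a trained neural
# network; here the "embeddings" are tiny invented vectors.
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe_embedding, gallery, threshold=0.8):
    # Score every enrolled face against the probe photo's embedding.
    scores = [(name, cosine_similarity(probe_embedding, vec))
              for name, vec in gallery.items()]
    # Everything above the threshold is returned as a candidate match,
    # even if the person in the probe photo is not in the gallery at all.
    candidates = [s for s in scores if s[1] >= threshold]
    return sorted(candidates, key=lambda s: s[1], reverse=True)

# Toy demonstration with invented 3-dimensional embeddings.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.2]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.85, 0.2, 0.25])
print(search_gallery(probe, gallery))  # person_a scores ~0.99
```

The important point is that a system like this always returns its best guesses; nothing in the process verifies that the right person is in the comparison data set at all.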


The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we’ve stated about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.”

The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, in which the stakes of an FRT misidentification are people’s lives and liberty.
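A toy example (the numbers are invented) shows how a system’s match threshold trades these two error types against each other: lowering it produces false positives, raising it produces false negatives.

```python
# Invented similarity scores for pairs of photos; "same" is ground truth.
pairs = [
    {"score": 0.91, "same": True},   # same person, high score
    {"score": 0.74, "same": True},   # same person, middling score
    {"score": 0.70, "same": False},  # different people, middling score
    {"score": 0.30, "same": False},  # different people, low score
]

def error_rates(threshold):
    # A false positive: different people scored at or above the threshold.
    fp = sum(p["score"] >= threshold and not p["same"] for p in pairs)
    # A false negative: the same person scored below the threshold.
    fn = sum(p["score"] < threshold and p["same"] for p in pairs)
    return fp, fn

for t in (0.6, 0.8):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} false positive(s), {fn} false negative(s)")
# threshold=0.6: 1 false positive(s), 0 false negative(s)
# threshold=0.8: 0 false positive(s), 1 false negative(s)
```

And when, as the report documents, similarity scores for some demographic groups skew systematically, the false positives concentrate on those groups no matter where the threshold is set.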

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

Unveiling Venezuela’s Repression: A Legacy of State Surveillance and Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines degraded infrastructure (essential services kept barely functional), blocked independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, the civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have mounted a severe backlash against demonstrations, leaving 20 people dead. The results announced by the government, in which they claimed a re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with numerous instances of technology-supported repression and violence. The surveillance and control apparatus saw intensified use, such as increased deployment of VenApp, a surveillance application originally launched in December 2022 for reporting failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is the pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media; some individuals have refused interviews or published documented human rights violations under generic names; and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Additionally, many now send information about what is happening in the country to their networks abroad for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist, and in August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.


You Really Do Have Some Expectation of Privacy in Public

Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, so your presence will be recorded in one way or another. There’s nothing stopping a person from observing you if they’re standing across the street. If law enforcement has decided to investigate you, they can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on their individual video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before.

Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public. While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” The Fourth Amendment protects “people, not places.” U.S. privacy law instead typically asks whether your expectation of privacy is something society considers “reasonable.”

This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—that the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI). CSLI records, generated whenever your phone connects to nearby cell towers, can provide a comprehensive chronicle of your movements over an extended period of time. Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so.
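To see why accumulated CSLI is so revealing, consider a minimal sketch with invented records and field names. Each record says only which tower a phone connected to and when, but sorted together the records read like a diary.

```python
# Hypothetical CSLI records: each one is just a tower ID and a timestamp,
# yet in aggregate they chronicle a person's day. All data here is invented.
from datetime import datetime

towers = {
    "tower_17": "near a medical clinic",
    "tower_42": "residential neighborhood",
    "tower_88": "downtown, near a protest site",
}

csli_records = [
    {"tower": "tower_42", "time": datetime(2018, 3, 4, 7, 55)},
    {"tower": "tower_17", "time": datetime(2018, 3, 4, 10, 12)},
    {"tower": "tower_88", "time": datetime(2018, 3, 4, 13, 40)},
    {"tower": "tower_42", "time": datetime(2018, 3, 4, 19, 5)},
]

# Sorting by time turns isolated pings into a movement timeline.
for r in sorted(csli_records, key=lambda r: r["time"]):
    print(f"{r['time']:%H:%M} -- phone seen by {r['tower']} ({towers[r['tower']]})")
```

Multiply these four pings by the 127 days at issue in Carpenter, and the “comprehensive chronicle” the Court described comes into focus.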

Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.”

As Chief Justice John Roberts said in the majority opinion:

“Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . .

As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . .  A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public.

If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy.

It's crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing. More revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward—what can police infer from having those sensitive and chronologically expansive records of your movement? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like this involve people driving on public roads or being seen in public, but we also have to take time into consideration. Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location.

The courts may still be catching up with the law and technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions on tracking our movements over time in public, even when we find ourselves out in the world walking past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on how privacy isn’t dead.

EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches

The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.

The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.

The case details repeated actions by the police to target and try to intimidate plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged Ms. Armendariz “threw” her bike at an officer as he was running, and even though the bike never touched the officer, police charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization for a week, even though the Center was not accused of any crime.

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.

As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that just one type of data can reveal more than previously possible, because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users’ private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.

Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map activists who communicated with the Center and about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That failed to happen here, implicating several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, they are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.

The Tenth Circuit should let this case proceed. Allowing police to conduct a virtual fishing expedition through a protester’s devices, especially when the justification for that search is an arrest for a crime that has no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.

Backyard Privacy in the Age of Drones

By: Hannah Zhao
27 August 2024 at 11:12

This article was originally published by The Legal Aid Society's Decrypting a Defense Newsletter on August 5, 2024 and is reprinted here with permission.

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and by drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious worries about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

In response to the concerns regarding drone surveillance voiced by civil liberties groups and others, some law enforcement agencies, like the NYPD, have pledged to abide by internal policies to refrain from warrantless use over private property. But without enforcement mechanisms, those empty promises are easily discarded by officials when they consider them inconvenient, as NYC Mayor Eric Adams did in announcing that drones would, in fact, be deployed to indiscriminately spy on backyard parties over Labor Day.

Barring a seismic shift away from Ciraolo and Riley by the U.S. Supreme Court (which seems nigh impossible given the Fourth Amendment approach of the current members of the bench), protection from warrantless aerial surveillance—and successful legal challenges—will have to come from the states. Indeed, six months after Ciraolo was decided, the California Supreme Court held in People v. Cook that under the state’s constitution, an individual had a reasonable expectation that police will not conduct warrantless surveillance of their backyard from the air. More recently, other states, such as Hawai’i, Vermont, and Alaska, have similarly relied on their state constitutions’ Fourth Amendment corollaries to find warrantless aerial surveillance improper. Some states have also passed new laws regulating governmental drone use. And at least half a dozen states, including Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia, have statutes requiring warrants (with exceptions) for police use.

Law enforcement’s use of drones will only proliferate in the coming years, and drone capabilities continue to evolve rapidly. Courts and legislatures must keep pace to ensure that privacy rights do not fall victim to the advancement of technology.

For more information on drones and other surveillance technologies, please visit EFF’s Street Level Surveillance guide at https://sls.eff.org/.

Federal Appeals Court Finds Geofence Warrants Are “Categorically” Unconstitutional

12 August 2024 at 15:26

In a major decision on Friday, the federal Fifth Circuit Court of Appeals held that geofence warrants are “categorically prohibited by the Fourth Amendment.” Closely following arguments EFF has made in a number of cases, the court found that geofence warrants constitute the sort of “general, exploratory rummaging” that the drafters of the Fourth Amendment intended to outlaw. EFF applauds this decision because it is essential that every person feel like they can simply take their cell phone out into the world without the fear that they might end up a criminal suspect because their location data was swept up in an open-ended digital dragnet.

The new Fifth Circuit case, United States v. Smith, involved an armed robbery and assault of a US Postal Service worker at a post office in Mississippi in 2018. After several months of investigation, police had no identifiable suspects, so they obtained a geofence warrant covering a large geographic area around the post office for the hour surrounding the crime. Google responded to the warrant with information on several devices, ultimately leading police to the two defendants.

On appeal, the Fifth Circuit reached several important holdings.

First, it determined that under the Supreme Court’s landmark ruling in Carpenter v. United States, individuals have a reasonable expectation of privacy in the location data implicated by geofence warrants. As a result, the court broke from the Fourth Circuit’s deeply flawed decision last month in United States v. Chatrie, noting that although geofence warrants can be more “limited temporally” than the data sought in Carpenter, geofence location data is still highly invasive because it can expose sensitive information about a person’s associations and allow police to “follow” them into private spaces.

Second, the court found that even though investigators seek warrants for geofence location data, these searches are inherently unconstitutional. As the court noted, geofence warrants require a provider, almost always Google, to search “the entirety” of its reserve of location data “while law enforcement officials have no idea who they are looking for, or whether the search will even turn up a result.” Therefore, “the quintessential problem with these warrants is that they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search. That is constitutionally insufficient.”
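A rough sketch of the kind of query such a warrant compels may help illustrate the court’s point; the schema and coordinates below are invented. Notice that nothing in the function identifies a suspect: the search is defined only by a box of space and time, and every user’s stored location data must be scanned to answer it.

```python
# A simplified, hypothetical model of a geofence query. A real provider's
# location store and query engine are far more complex; the point is the
# shape of the search, not the implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LocationPing:
    device_id: str
    lat: float
    lon: float
    time: datetime

def geofence_search(all_pings, lat_range, lon_range, start, end):
    # There is no suspect parameter: the query is defined purely by
    # place and time, so every device's data must be examined.
    hits = set()
    for p in all_pings:
        if (lat_range[0] <= p.lat <= lat_range[1]
                and lon_range[0] <= p.lon <= lon_range[1]
                and start <= p.time <= end):
            hits.add(p.device_id)
    return hits

# Example: every device near a (fictional) post office for one hour.
pings = [
    LocationPing("device_a", 32.305, -90.195, datetime(2018, 2, 5, 16, 20)),
    LocationPing("device_b", 32.400, -90.300, datetime(2018, 2, 5, 16, 25)),
]
print(geofence_search(pings, (32.30, 32.31), (-90.20, -90.19),
                      datetime(2018, 2, 5, 16, 0), datetime(2018, 2, 5, 17, 0)))
# {'device_a'}
```

Everyone who merely lived, worked, or passed nearby lands in the result set, which is precisely the “general, exploratory rummaging” the court condemned.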

Unsurprisingly, however, the court found that in 2018, police could have relied on such a warrant in “good faith,” because geofence technology was novel, and police reached out to other agencies with more experience for guidance. This means that the evidence they obtained will not be suppressed in this case.

Nevertheless, it is gratifying to see an appeals court recognize the fundamental invasions of privacy created by these warrants and uphold our constitutional tradition prohibiting general searches. Police around the country have increasingly relied on geofence warrants and other reverse warrants, and this opinion should act as a warning against narrow applications of Fourth Amendment precedent in these cases.
