
Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

March 22, 2024 at 7:10 PM

This post was written by Rachel Hochhauser, an EFF legal intern

We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool, ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company that produces and distributes audio gunshot detection systems for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to human reviewers who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring) and whether to deploy officers to the scene.

ShotSpotter claims that its technology is “97% accurate,” a figure produced by the marketing department and not by engineers. The recent Chicago shooting shows just how misleading that figure is. Indeed, a 2021 study in Chicago found that, over a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and resulted in arrests for only 1% of proven shootings, according to a recent CBS report. The technology is predominantly deployed in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter alerts arrive at scenes expecting gunfire; they are on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such recordings from ShotSpotter to be admissible evidence.

In February, Chicago’s Mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled, or are considering cancelling, their use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement has also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of a suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers, paired with 3D scans of their faces. It is currently the only company offering phenotyping, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested and how many more reckless schemes like this one need to be perpetrated before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests

March 19, 2024 at 6:55 PM

The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.  

The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the state Open Records Act with respect to the functions it performs on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department’s Video Surveillance Center, which integrates footage from over 16,000 public and privately-held surveillance cameras across the city.

According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.” 

Police foundations frequently operate in this space. They are private, non-profit organizations with boards made up of corporations and law firms that receive monetary or equipment donations that they then gift to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight. 

Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.” 

You can read the lawsuit here. 

San Diego City Council Breaks TRUST

March 15, 2024 at 2:54 PM

In a stunning reversal against the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance, the San Diego city council voted earlier this year to cut many of the provisions that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations, including several members of the Electronic Frontier Alliance, responding to surprise smart streetlight surveillance that was not put under public or city council review.

The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t run into conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance. This included exemption of a long list of technologies, including ARJIS databases and record management system data storage. These changes were later approved this past January.  

But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance stands, but partly defanged with many carve-outs for technologies the San Diego police will not need to bring before democratically-elected lawmakers and the public. 

Now, opponents of the TRUST ordinance are emboldened with their recent victory, and are vowing to introduce even more amendments to further erode the gains of this ordinance so that San Diegans won’t have a chance to know how their local law enforcement surveils them, and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the standing portions of the TRUST ordinance, and to regain the wins for public oversight that were lost. 

As Lilly Irani, from Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

The Atlas of Surveillance Removes Ring, Adds Third-Party Investigative Platforms

Running the Atlas of Surveillance, our project to map and inventory police surveillance across the United States, means experiencing emotional extremes.

Whenever we announce that we've added new data points to the Atlas, it comes with a great sense of satisfaction. That's because it almost always means that we're hundreds or even thousands of steps closer to achieving what only a few years ago would've seemed impossible: comprehensively documenting the surveillance state through our partnership with students at the University of Nevada, Reno Reynolds School of Journalism.

At the same time, it's depressing as hell. That's because it also reflects how quickly and dangerously the surveillance technology is metastasizing.

We have the exact opposite feeling when we remove items from the Atlas of Surveillance. It's a little sad to see our numbers drop, but at the same time that change in data usually means that a city or county has eliminated a surveillance program.

That brings us to the biggest change in the Atlas since our launch in 2018. This week, we removed 2,530 data points: an entire category of surveillance. With the announcement from Amazon that its home surveillance company Ring will no longer facilitate warrantless requests for consumer video footage, we've decided to sunset that particular dataset.

While law enforcement agencies still maintain accounts on Ring's Neighbors social network, it seems to serve as a communications tool, a function on par with services like Nixle and Citizen, which we currently don't capture in the Atlas. That's not to say law enforcement won't be gathering footage from Ring cameras: they will, through legal process or by directly asking residents to give them access via the Fusus platform. But that type of surveillance doesn't result from merely having a Neighbors account (agencies without accounts can use these methods to obtain footage), which was what our data documented. You can still find out which agencies are maintaining camera registries through the Atlas. 

Ring's decision was a huge victory – and the exact outcome EFF and other civil liberties groups were hoping for. It also has opened up our capacity to track other surveillance technologies growing in use by law enforcement. If we were going to remove a category, we decided we should add one too.

Atlas of Surveillance users will now see a new type of technology: Third-Party Investigative Platforms, or TPIPs. Common TPIP products include Thomson Reuters CLEAR, LexisNexis Accurint Virtual Crime Center, TransUnion TLOxp, and SoundThinking CrimeTracer (formerly Coplink X from Forensic Logic). These are technologies we've been watching for a while but have struggled to categorize and define. Here's the definition we've come up with:

Third-Party Investigative Platforms are cloud-based software systems that law enforcement agencies subscribe to in order to access, share, mine, and analyze various sources of investigative data. Some of the data the agencies upload themselves, but the systems also provide access to data from other law enforcement, as well as from commercial sources and data brokers. Many products offer AI features, such as pattern identification, face recognition, and predictive analytics. Some agencies employ multiple TPIPs.

We are calling this new category a beta feature in the Atlas, since we are still figuring out how best to research and compile this data nationwide. You'll find fairly comprehensive data on the use of CrimeTracer in Tennessee and Massachusetts, because both states provide the software to local law enforcement agencies throughout the state. Similarly, we've got a large dataset for the use of the Accurint Virtual Crime Center in Colorado, due to a statewide contract. (Big thanks to Prof. Ran Duan's Data Journalism students for working with us to compile those lists!) We've also added more than 60 other agencies around the country, and we expect that dataset to grow as we hone our research methods.

If you've got information on the use of TPIPs in your area, don't hesitate to reach out. You can email us at aos@eff.org, submit a tip through our online form, or file a public records request using the template that EFF and our students have developed to reveal the use of these platforms. 

We Flew a Plane Over San Francisco to Fight Proposition E. Here's Why.

February 29, 2024 at 3:19 PM

Proposition E, which San Franciscans will be asked to vote on in the March 5 election, is so dangerous that last weekend we chartered a plane to inform our neighbors about what the ballot measure does and urge them to vote NO on it. If you were in Dolores Park, Golden Gate Park, Chinatown, or anywhere in between on Saturday, there’s a chance you saw it, with a huge banner flying through the sky: “No Surveillance State! No on Prop E.”

Despite the fact that the San Francisco Chronicle has endorsed a NO vote on Prop E, and has even quoted police who don’t find its changes useful for keeping the public safe, proponents of Prop E have raised over $1 million to push this unnecessary, ill-thought-out, and downright dangerous ballot measure.

San Francisco, Say NOPE: Vote NO on Prop E on March 5

A plane flying over the San Francisco skyline carrying a banner asking people to vote no on Prop E

What Does Prop E Do?

Prop E is a haphazard mess of proposals that tries to capitalize on residents’ fear of crime in an attempt to gut commonsense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the civilian-staffed Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Prop E would also amend existing law passed in 2019 to protect San Franciscans from invasive, untested, or biased police surveillance technologies. Currently, if the SFPD wants to acquire a new technology, they must provide a detailed use policy to the democratically-elected Board of Supervisors, in a process that allows for public comment. The Board then votes on whether and how the police can use the technology.

Prop E guts these protective measures designed to bring communities into the conversation about public safety. If Prop E passes on March 5, then the SFPD can unilaterally use any technology they want for a full year without the Board’s approval, without publishing an official policy about how they’d use the technology, and without allowing community members to voice their concerns.


Why is Prop E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency, accountability, or democratic control.

San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data being harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body first. Prop E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

What Technology Would Prop E Allow Police to Use?

That's the thing—we don't know, and if Prop E passes, we may never know. Today, if the SFPD decides to use a piece of surveillance technology, there is a process for sharing that information with the public. With Prop E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. 

Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing,” and social media scanning tools are just two examples. And according to the City Attorney, Prop E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology. San Francisco currently has a ban on police using remote-controlled robots to deploy deadly force, but if passed, Prop E would allow police to invest in technologies like taser-armed drones without any oversight or potential for elected officials to block the sale. 

Don’t let police experiment on San Franciscans with dangerous, untested surveillance technologies. Say NOPE to a surveillance state. Vote NO on Prop E on March 5.  

What is Proposition E and Why Should San Francisco Voters Oppose It?

February 2, 2024 at 6:39 PM

If you live in San Francisco, there is an election on March 5, 2024, during which voters will decide a number of local ballot measures—including Proposition E. Proponents of Proposition E have raised over $1 million, but what does the measure actually do? This post will break down what the initiative actually does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink" approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police have used this procedure to get new technological capabilities as recently as Fall 2022 in a way that stimulated discussion, garnered community involvement and opposition (including from EFF), and still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data being harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body first. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm, or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing; it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars.

What Technology Would Proposition E Allow the Police to Use?

That's the thing—we don't know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs, and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

San Francisco Police’s Live Surveillance Yields Almost 200 Hours of Spying–Including of Music Festivals

A new report reveals that in just three months, from July 1 to September 30, 2023, the San Francisco Police Department (SFPD) racked up 193 hours and 19 minutes of live access to non-city surveillance cameras. That means for the equivalent of 8 days, police sat behind a desk and tapped into hundreds of cameras, ostensibly including San Francisco’s extensive semi-private security camera networks, to watch city residents, workers, and visitors live. An article by the San Francisco Chronicle analyzing the report also uncovered that the SFPD tapped into these cameras to watch 42 hours of live footage during the Outside Lands music festival.

The city’s Board of Supervisors granted police permission to get live access to these cameras in September 2022 as part of a 15-month pilot program to see if allowing police to conduct widespread, live surveillance would create more safety for all people. However, even before this legislation’s passage, the SFPD covertly used non-city security cameras to monitor protests and other public events. In fact, police and the rich man who funded large networks of semi-private surveillance cameras both claimed publicly that the police department could easily access historic footage of incidents after the fact to help build cases, but could not peer through the cameras live. This claim was debunked by EFF and other investigators, who revealed that police had requested live access to semi-private cameras to monitor protests, parades, and public events—despite these being exactly the kinds of activity the First Amendment protects.

When the Board of Supervisors passed this ordinance, which allowed police live access to non-city cameras for criminal investigations (for up to 24 hours after an incident) and for large-scale events, we warned that police would use this newfound power to put huge swaths of the city under surveillance—and we were unfortunately correct.

The most egregious example from the report is the 42 hours of live surveillance conducted during the Outside Lands music festival, which yielded five arrests for theft, pickpocketing, and resisting arrest—and only one of which resulted in the District Attorney’s office filing charges. Despite proponents’ arguments that live surveillance would promote efficiency in policing, in this case, it resulted in a massive use of police resources with little to show for it.

Many unanswered questions remain about how the police are using these cameras. As the Chronicle article recognized:

…nearly a year into the experiment, it remains unclear just how effective the strategy of using private cameras is in fighting crime in San Francisco, in part because the Police Department’s disclosures don’t provide information on how live footage was used, how it led to arrests and whether police could have used other methods to make those arrests.

Greater transparency—and, at a minimum, police compliance with all reporting requirements mandated by the non-city surveillance camera ordinance—is crucial to truly evaluating the impact that access to live surveillance has had on policing. In particular, the SFPD’s data fails to make clear how live surveillance helps police prevent or solve crimes in a way that after-the-fact footage does not.

Nonetheless, surveillance proponents tout this report as showing that real-time access to non-city surveillance cameras is effective in fighting crime. Many are using this to push for a measure on the March 5, 2024 ballot, Proposition E, which would roll back police accountability measures and grant even more surveillance powers to the SFPD. In particular, Prop E would allow the SFPD a one-year pilot period to test out any new surveillance technology, without any use policy or oversight by the Board of Supervisors. As we’ve stated before, this initiative is bad all around—for policing, for civil liberties, and for all San Franciscans.

Police in San Francisco still don’t get it. They can continue to heap more time, money, and resources into fighting oversight and amassing all sorts of surveillance technology—but at the end of the day, this still won’t help combat the societal issues the city faces. Technologies touted as being useful in extreme cases will just end up as an oversized tool for policing misdemeanors and petty infractions, and will undoubtedly put already-marginalized communities further under the microscope. Just as it’s time to continue asking questions about what live surveillance helps the SFPD accomplish, it’s also time to oppose the erosion of existing oversight by voting NO on Proposition E on March 5. 

San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

January 25, 2024 at 13:14

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

Privacy info. This embed will serve content from youtube.com

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, the city employee said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.


Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance that requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities—it’s also by any measure a commonsense and reasonable provision.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “..the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…”  In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.


This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly-science fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn’t end up doing more damage to the patient. But that’s exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies. 

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

January 24, 2024 at 14:09

Amazon’s Ring has announced that it will no longer facilitate police's warrantless requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools to allow law enforcement to more easily access companies’ users and their data—all of which ultimately undermine their customers’ trust.


Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user’s email inbox, opting instead for a system where police had to publicly post requests on Ring’s Neighbors app. Now, Ring will hopefully be out of the business of platforming casual and warrantless police requests for footage altogether. This is a step in the right direction, but it comes after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped to push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring can enable end-to-end encryption on its devices by default and turn off audio collection by default, since reports have shown the devices collect audio from greater distances than initially assumed. We also remain deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy and to end Ring’s historic ill-effects on society aren’t over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring’s early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies. This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage. 

The Atlas of Surveillance Hits Major Milestones: 2023 in Review

December 28, 2023 at 11:24

"The EFF are relentless."

That's what a New York Police Department lieutenant wrote on LinkedIn after someone sent him a link to the Atlas of Surveillance, EFF's moonshot effort to document which U.S. law enforcement agencies are using which technologies, including drones, automated license plate readers, and face recognition. Of course, the lieutenant then went on to attack us with unsubstantiated accusations of misinformation—but we take it all as a compliment.

If you haven't checked out the Atlas of Surveillance recently, or ever before, you absolutely should. It includes a searchable database and an interactive map, and anyone can download the data for their own projects. As this collaboration with the University of Nevada, Reno's Reynolds School of Journalism (RSJ) finishes its fifth year, we are proud to announce that we've hit a major milestone: more than 12,000 data points that document the use of police surveillance nationwide, all collected using open-source investigative techniques, data journalism, and public records requests.

We’ve come a long way since the Atlas of Surveillance launched as a pilot project with RSJ back in the spring semester of 2019. By that summer, with the help of a few dozen journalism students, we had accumulated 250 data points, focused on the 23 counties along the U.S.-Mexico border. When we launched the formal website in 2020, we had collected a little more than 5,500 data points. Today's dataset represents more than a 100% increase since then.

That isn't the only major milestone we accomplished this year. To collect data for the project, EFF and RSJ designed a tool called Report Back, which allows us to distribute micro-research assignments (about 10-20 minutes each) to students in our classes. This winter, the 3,000th assignment was completed using Report Back.

This year we also dug into one particular technology. As part of our Atlas efforts, we began to see Fusus—a company working to bring real-time surveillance to local police departments via camera registries and real-time crime centers—appear more frequently as a tool used by law enforcement. In collaboration with the Thomson Reuters Foundation, we decided to do a deeper dive into the adoption of Fusus, and the Atlas has served as a resource for other reporters working to investigate this company in their own towns and across the country.

We’re proud to have built the Atlas because it’s meant to be a tool for the public, and we're excited to see more and more people discovering it. This year, we clocked about 250,000 pageviews, more than double what we've seen in previous years. This tells us not only that more people care about police surveillance than ever before, but that we're better able to inform them about what's happening locally in their communities. The top 20 jurisdictions with the most traffic include:

  1. Phoenix, Ariz.
  2. Chicago, Ill.
  3. Los Angeles, Calif.
  4. Atlanta, Ga.
  5. New York City, N.Y.
  6. Austin, Texas
  7. Houston, Texas
  8. San Antonio, Texas
  9. Seattle, Wash.
  10. Columbus, Ohio  
  11. Las Vegas, Nev.
  12. Dallas, Texas
  13. Philadelphia, Penn.
  14. Denver, Colo. 
  15. Tampa, Fla.
  16. West Bloomfield, Mich.
  17. Portland, Ore.
  18. San Diego, Calif.
  19. Nashville, Tenn.
  20. Pittsburgh, Penn. 

One of the primary goals of the Atlas of Surveillance project is to reach journalists, academics, activists, and policymakers, so they can use our data to better inform their research. In this sense, 2023 was a huge success. Here are some of our favorite projects that used Atlas of Surveillance data this year:

  • Social justice advocates were trained on how to use the Atlas of Surveillance in a workshop titled "Data Brokers & Modern Surveillance: Dangers for Marginalized People" at an annual Friends (Quakers) conference. 
  • A team of master’s students at the University of Amsterdam built a website called "Beyond the Lens" that analyzes the police surveillance industry using primary data from the Atlas of Surveillance. 
  • The Markup combined Atlas data with census data, crime data, and emails obtained through the California Public Records Act to investigate the Los Angeles Police Department's relationship with Ring, Amazon's home video surveillance subsidiary. 

The Atlas has also been cited in government proceedings and court briefs.

The Atlas also made appearances in a number of academic and legal scholarship publications in 2023.

Meanwhile, print, radio, and television journalists continued to turn to the Atlas as a resource this year, either to build stories about police surveillance or to provide context.

Activists, advocates, and concerned citizens around the nation have also used the Atlas of Surveillance to support their actions against the expansion of surveillance.

These victories wouldn't be possible without the students at RSJ, especially our 2023 interns Haley Ekberg, Kieran Dazzo, Dez Peltzer, and Colin Brandes. We also owe thanks to lecturers Paro Pain, Ran Duan, Jim Scripps, and Patrick File for sharing their classrooms with us.

In 2024, EFF will expand the Atlas to capture more technologies used by law enforcement agencies. We are also planning new features, functions, and fixes that will allow users to better browse and analyze the data. And of course, you should keep an eye out in the new year for new workshops, talks, and other opportunities to learn more and get involved with the project.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Artificial Intelligence and Policing: Year in Review 2023

December 23, 2023 at 12:33

Machine learning, artificial intelligence, algorithmic decision making—regardless of what you call it (and there is hot debate over that), this technology has been touted as a supposed threat to humanity and to the future of work, as well as the hot new money-making doohickey. But one thing is for certain: given the amount of data required as input to these systems, law enforcement agencies are seeing major opportunities, and our civil liberties will suffer the consequences. In one sense, all of the information needed to, for instance, run a self-driving car presents a new opportunity for law enforcement to piggyback on new devices covered in cameras, microphones, and sensors to be their eyes and ears on the streets. This is exactly why at least one U.S. Senator has begun sending letters to car manufacturers hoping to get to the bottom of exactly how much data vehicles, including those deemed autonomous or with “self-driving” modes, collect and who has access to it.

But in another way, the possibility of plugging a vast amount of information into a system and getting automated responses or directives is also rapidly becoming a major problem for innocent people hoping to go un-harassed and un-surveilled by police. So much has been written in the last few years about how predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and just plain-old don’t work. One investigation from the Markup and WIRED found, “Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.”

This year, Georgetown Law’s Center on Privacy and Technology also released an incredible resource: Cop Out. This massive and useful investigation examines automation in the criminal justice system and the several moments, from policing to parole, when a person might have their fate decided by a machine.

EFF has long called for a ban on predictive policing and commended cities like Santa Cruz when they took that step. The issue became especially important in recent months when Sound Thinking, the company behind ShotSpotter—an acoustic gunshot detection technology that is rife with problems—was reported to be buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. Sound Thinking acquired the other major predictive policing technology—Hunchlab—in 2018. This consolidation of harmful and flawed technologies means it’s even more critical for cities to move swiftly to ban the harmful tactics of both of these technologies.

In 2024, we’ll continue to monitor the rapid rise of police utilizing machine learning, both by cannibalizing the data that other “autonomous” devices require and by creating or contracting their own algorithms to help guide law enforcement and other branches of the criminal justice system. In the new year, we hope that more cities and states will continue the good work by banning the use of this dangerous technology.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Surveillance and the U.S.-Mexico Border: 2023 Year in Review

December 21, 2023 at 11:06

The U.S.-Mexico border continues to be one of the most politicized spaces in the country, with leaders in both political parties supporting massive spending on border security, including technological solutions such as the so-called "virtual wall." We spent the year documenting surveillance technologies at the border and the impacts on civil liberties and human rights of those who live in the borderlands.

In early 2023, EFF staff completed the last of three trips to the U.S.-Mexico border, where we met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities.

Using information from those trips, as well as from public records, satellite imagery, and exploration in virtual reality, we released a map and dataset of more than 390 surveillance towers installed by Customs and Border Protection (CBP) along the U.S.-Mexico border. Our data serves as a living snapshot of the so-called "virtual wall," from the California coast to the lower tip of Texas. The data also lays the foundation for many types of research ranging from border policy to environmental impacts.

We also published an in-depth report on Plataforma Centinela (Sentinel Platform), an aggressive new surveillance system developed by Chihuahua state officials in collaboration with a notorious Mexican security contractor. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project is unlike anything seen before along the U.S.-Mexico border. The strategy adopts nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more. It also involves a 20-story high-rise in downtown Ciudad Juarez, known as the Torre Centinela (Sentinel Tower), that will serve as the central node of the surveillance operation. We’ll continue to keep a close eye on the development of this surveillance panopticon.

Finally, we weighed in on the dangers of border surveillance on civil liberties by filing an amicus brief in the U.S. Court of Appeals for the Ninth Circuit. The case, Phillips v. U.S. Customs and Border Protection, was filed after a 2019 news report revealed the federal government was conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases. Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs. The plaintiffs urged the panel to reconsider, or for the full Ninth Circuit to rehear the case. In our amicus brief, we argued that the plaintiffs have privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology. We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm due to lax policies on data sharing, abuse, or data breach.

Undoubtedly, next year’s election will only heighten the focus on border surveillance technologies in 2024. As we’ve seen time and again, increasing surveillance at the border is a bipartisan strategy, and we don’t expect that to change in the new year.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

December 18, 2023 at 10:12

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments use technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of “smart borders” as a more humanitarian approach to addressing and streamlining border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Is This the End of Geofence Warrants?

December 13, 2023 at 19:46

Google announced this week that it will be making several important changes to the way it handles users’ “Location History” data. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, as a typical warrant for digital communications would be. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, these warrants inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.
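To make concrete why these warrants sweep so broadly, here is a minimal, purely illustrative Python sketch of a geofence-style search. The data layout, field names, and distance check are our own assumptions for illustration; they do not reflect Google's actual systems. The search simply keeps every device with at least one stored point inside a circle and a time window:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass(frozen=True)
class LocationPoint:
    # Hypothetical record shape for one stored location observation.
    device_id: str
    lat: float
    lon: float
    ts: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def geofence_hits(points, lat, lon, radius_m, start, end):
    # Every device with even one point inside the circle and the time
    # window is swept in, regardless of any link to the crime itself.
    return {p.device_id
            for p in points
            if start <= p.ts <= end
            and haversine_m(p.lat, p.lon, lat, lon) <= radius_m}
```

Even a modest radius and a one-hour window in a dense city can match hundreds of device IDs, which is exactly how bystanders end up treated as potential suspects.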

Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls “Location History” data) altogether in a massive database called “Sensorvault.” Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google’s announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user’s device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will “automatically encrypt your backed-up data so no one can read it, including Google.”
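The mechanics of the default deletion window can be sketched similarly; this is an illustrative approximation rather than Google's implementation (90 days stands in for "three months," and the data shape is invented for the example):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # rough stand-in for a "three month" default

def purge_expired(points, now):
    # Keep only points younger than the retention window; anything older
    # is deleted on-device rather than lingering in a server-side store.
    return [p for p in points if now - p["ts"] <= RETENTION]
```

The practical effect is that, once the window passes, the data no longer exists to be searched or produced.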

All of this is fantastic news for users, and we are cautiously optimistic that it will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not only provide police with sensitive data on individuals but could also turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use.

However, we are not yet prepared to declare total victory. Google’s collection of users’ location data isn’t limited to just the “Location History” data searched in response to geofence warrants; Google collects additional location information as well. It remains to be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google’s changes will prevent law enforcement from issuing targeted warrants for individual users’ location data—outside of Location History—if police have probable cause to support such a search.

But for now, at least, we’ll take this as a win. It’s very welcome news for technology users as we usher in the end of 2023.

U.S. Senator: What Do Our Cars Know? And Who Do They Share that Information With?

December 1, 2023 at 13:44

U.S. Senator Ed Markey of Massachusetts has sent a much-needed letter to car manufacturers asking them to clarify some surprisingly hard questions to answer: What data do cars collect? Who has the ability to access that data? Private companies can often be black boxes of secrecy that obscure basic facts about the consumer electronics we use. This becomes a massive problem as these devices grow more technologically sophisticated and capable of collecting audio, video, and geolocation data, as well as biometric information. As the letter says:

“As cars increasingly become high-tech computers on wheels, they produce vast amounts of data on drivers, passengers, pedestrians, and other motorists, creating the potential for severe privacy violations. This data could reveal sensitive personal information, including location history and driving behavior, and can help data brokers develop detailed data profiles on users.”

Not only does the letter articulate the privacy harms imposed by vehicles (and trust us, cars are some of the least privacy-oriented devices on the market), it also asks probing questions of companies regarding what data is collected, who has access, particulars about how and for how long data is stored, whether data is sold, and how consumers and the public can go about requesting the deletion of that data.

Also essential are the questions concerning the relationship between car companies and law enforcement. We know, for instance, that self-driving car companies have built relationships with police and have, on a number of occasions, given footage to law enforcement to aid in investigations. Likewise, both Tesla employees and law enforcement have been given, or have gained, access to footage from the company’s electric vehicles.

A push for public transparency by members of Congress is essential and a necessary first step toward some much-needed regulation. Self-driving cars, cars with autonomous modes, or even just cars connected to the internet and equipped with cameras pose a serious threat to privacy—not just to drivers and passengers, but also to other motorists on the road and to the pedestrians who are forced to walk past these cars every day. We commend Senator Markey for this letter and hope that the companies respond quickly and honestly so we can have a better sense of what needs to change.

You can read the letter here.

It’s Time to Oppose the New San Francisco Policing Ballot Measure

November 9, 2023 at 21:34

San Francisco Mayor London Breed has filed a ballot initiative on surveillance and policing that, if approved, would greatly erode our privacy rights, endanger marginalized communities, and roll back the incredible progress the city has made in creating democratic oversight of police’s use of surveillance technologies. The measure will be up for a vote during the March 5, 2024 election.

Specifically, the ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance, which requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before they acquire or deploy new surveillance technologies. Agencies also need to put out a full report to the public about exactly how the technology would be used. This is an important way of making sure people who live or work in the city have a say in policing technologies that could be used in their communities.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “…the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approve by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy any technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.

But there is something we can do about this! It’s time to get the word out about what’s at stake during the March 5, 2024 election and urge voters to say NO to increased surveillance and decreased police accountability.

Like similar measures in other cities across the United States, this ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year in which police could contract with companies that buy up geolocation data from millions of cellphones and sift through it.

In the summer of 2020, in response to a mass Black-led movement against police violence that swept the nation, Mayor Breed said, “If we’re going to make real significant change, we need to fundamentally change the nature of policing itself…Let’s take this momentum and this opportunity at this moment to push for real change.” A central part of that vision was “ending the use of police in response to non-criminal activity; addressing police bias and strengthening accountability; [and] demilitarizing the police.”

It appears that Mayor Breed has turned her back on that stance and, with the introduction of her ballot measure, has instead embraced increased surveillance and decreased police accountability.

There’s more: this Monday, November 13, 2023 at 10:00am PT, the Rules Committee of the Board of Supervisors will meet to discuss upcoming ballot measures, including this awful policing and surveillance ballot measure. You can watch the Rules Committee meeting here, and most importantly, the live feed will tell you how to call in and give public comment. Tell the Board’s Rules Committee that police should not have free rein to deploy dangerous and untested surveillance technologies in San Francisco.

VICTORY! California Department of Justice Declares Out-of-State Sharing of License Plate Data Unlawful

California Attorney General Rob Bonta has issued a legal interpretation and guidance for law enforcement agencies around the state that confirms what privacy advocates have been saying for years: It is against the law for police to share data collected from license plate readers with out-of-state or federal agencies. This is an important victory for immigrants, abortion seekers, protesters, and everyone else who drives a car, as our movements expose intimate details about where we’ve been and what we’ve been doing.

Automated license plate readers (ALPRs) are cameras that capture the movements of vehicles and upload the locations of those vehicles to a searchable, shareable database. Law enforcement often installs these devices at fixed locations, such as streetlights, as well as on patrol vehicles used to canvass neighborhoods. It is a mass surveillance technology that collects data on everyone. In fact, EFF research has found that more than 99.9% of the data collected is unconnected to any crime or other public safety interest.

The California State legislature passed SB 34 in 2015 to require basic safeguards for the use of ALPRs. These include a prohibition on California agencies from sharing data with non-California agencies. They also include the publication of a usage policy that is consistent with civil liberties and privacy.

As EFF and other groups such as the ACLU of California, MuckRock News, and the Center for Human Rights and Privacy have demonstrated over and over again through public records requests, many California agencies have either ignored or defied these policies, putting Californians at risk. In some cases, agencies have shared data with hundreds of out-of-state agencies (including in states with abortion restrictions) and with federal agencies (such as U.S. Customs & Border Protection and U.S. Immigration & Customs Enforcement). This surveillance is especially threatening to vulnerable populations, such as migrants and abortion seekers, whose rights are protected in California but not recognized by other states or the federal government.

In 2019, EFF successfully lobbied the legislature to order the California State Auditor to investigate the use of ALPR. The resulting report came out in 2020, with damning findings that agencies were flagrantly violating the law. While state lawmakers have introduced legislation to address the findings, so far no bill has passed. In the absence of new legislative action, Attorney General Bonta's new memo, grounded in SB 34, serves as the authoritative standard for how local agencies should treat ALPR data.

The bulletin comes after EFF and the California ACLU affiliates sued the Marin County Sheriff in 2021, because his agency was violating SB 34 by sending its ALPR data to federal agencies including ICE and CBP. The case was favorably settled.

Attorney General Bonta’s guidance also follows new advocacy by these groups earlier this year. Along with the ACLU of Northern California and the ACLU of Southern California, EFF released public records from more than 70 law enforcement agencies in California that showed they were sharing data with states that have enacted abortion restrictions. We sent letters to each of the agencies demanding they end the sharing immediately. Dozens complied. Some disagreed with our determination, but nonetheless agreed to pursue new policies to protect abortion access.

Now California’s top law enforcement officer has determined that out-of-state data sharing is illegal and has drafted a model policy. Every agency in California must follow Attorney General Bonta's guidance, review their data sharing, and cut off every out-of-state and federal agency.

Or better yet, they could end their ALPR program altogether.

The State of Chihuahua Is Building a 20-Story Tower in Ciudad Juarez to Surveil 13 Cities–and Texas Will Also Be Watching

EFF Special Advisor Paul Tepper and EFF intern Michael Rubio contributed research to this report.

Chihuahua state officials and a notorious Mexican security contractor broke ground last summer on the Torre Centinela (Sentinel Tower), an ominous, 20-story high-rise in downtown Ciudad Juarez that will serve as the central node of a new AI-enhanced surveillance regime. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project will be unlike anything seen before along the U.S.-Mexico border.

And that's saying a lot, considering the last 30-plus years of surging technology on the U.S. side of the border.

The Torre Centinela will stand in a former parking lot next to the city's famous bullring, a mere half-mile south of where migrants and asylum seekers have camped and protested at the Paso del Norte International Bridge leading to El Paso. But its reach goes much further: the Torre Centinela is just one piece of the Plataforma Centinela (Sentinel Platform), an aggressive new technology strategy developed by Chihuahua's Secretaria de Seguridad Pública Estatal (Secretary of State Public Security or SSPE) in collaboration with the company Seguritech.

With its sprawling infrastructure, the Plataforma Centinela will create an atmosphere of surveillance and data-streams blanketing the entire region. The plan calls for nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more.

If the project comes together as advertised in the Avengers-style trailer that SSPE released to influence public opinion, law enforcement personnel on site will be surrounded by wall-to-wall monitors (140 meters of screens per floor), while 2,000 officers in the field will be able to access live intelligence through handheld tablets.

Texas law enforcement will also have "eyes on this side of the border" via the Plataforma Centinela, Chihuahua Governor Maru Campos publicly stated last year. Texas Governor Greg Abbott signed a memorandum of understanding confirming the partnership.

Plataforma Centinela will transform public life and threaten human rights in the borderlands in ways that aren't easy to assess. Regional newspapers and local advocates, especially Norte Digital and Frente Político Ciudadano para la Defensa de los Derechos Humanos (FPCDDH), have raised significant concerns about the project, pointing to a low likelihood of success and a high potential for waste and abuse.

"It is a myopic approach to security; the full emphasis is placed on situational prevention, while the social causes of crime and violence are not addressed," FPCDDH member and analyst Victor M. Quintana tells EFF, noting that the Plataforma Centinela's budget is significantly higher than what the state devotes to social services. "There are no strategies for the prevention of addiction, neither for rebuilding the fabric of society nor attending to dropouts from school or young people at risk, which are social causes of insecurity."

Instead of providing access to unfiltered information about the project, the State of Chihuahua has launched a public relations blitz. In addition to press conferences and the highly produced cinematic trailer, SSPE recently hosted a "Pabellón Centinela" (Sentinel Pavilion), a family-friendly carnival where the public was invited to check out a camera wall and drones, while children played with paintball guns, drove a toy ATV patrol vehicle around a model city, and colored in illustrations of a data center operator.

Behind that smoke screen, state officials are doing almost everything they can to control the narrative around the project and avoid public scrutiny.

According to news reports, the SSPE and the Secretaría de Hacienda (Finance Secretary) have simultaneously deemed most information about the project as classified and left dozens of public records requests unanswered. The Chihuahua State Congress also rejected a proposal to formally declassify the documents and stymied other oversight measures, including a proposed audit. Meanwhile, EFF has submitted public records requests to several Texas agencies and all have claimed they have no records related to the Plataforma Centinela.

This is all the more troubling considering the relationship between the state and Seguritech, a company whose business practices in 22 other jurisdictions have been called into question by public officials.

What we can be sure of is that the Plataforma Centinela project may serve as proof of concept of the kind of panopticon surveillance governments can get away with in both North America and Latin America.

What Is the Plataforma Centinela?

High-tech surveillance centers are not a new phenomenon on the Mexican side of the border. These facilities tend to use "C" distinctions to explain their functions and purposes. EFF has mapped out dozens of these in the six Mexican border states.

A screen capture of a Google Map of Mexican C-Centers

They include:

  • C4 (Centro de Comunicación, Cómputo, Control y Comando) (Center for Communication, Computing, Control, and Command), 
  • C5 (Centro de Coordinación Integral, de Control, Comando, Comunicación y Cómputo del Estado) (Center for Integral Coordination of Control, Command, Communication, and State Computing), 
  • C5i (Centro de Control, Comando, Comunicación, Cómputo, Coordinación e Inteligencia) (Center for Control, Command, Communication, Computing, Coordination, and Intelligence).

Typically, these centers function as a cross between a 911 call center and a real-time crime center, with operators handling emergency calls, analyzing crime data, and controlling a network of surveillance cameras via a wall of monitors. In some cases, the Cs may appear in a different order or stand for slightly different words. For example, some C5s might alternately stand for "Centros de Comando, Control, Comunicación, Cómputo y Calidad" (Centers for Command, Control, Communication, Computing, and Quality). These facilities also exist in other parts of Mexico. The number of Cs sometimes indicates scale and responsibilities, but more often than not, it seems to be a political or marketing designation.

The Plataforma Centinela, however, goes far beyond the scope of previous projects and in fact will be known as the first C7 (Centro de Comando, Cómputo, Control, Coordinación, Contacto Ciudadano, Calidad, Comunicaciones e Inteligencia Artificial) (Center for Command, Computing, Control, Coordination, Citizen Contact, Quality, Communications, and Artificial Intelligence). The Torre Centinela in Ciudad Juarez will serve as the nerve center, with more than a dozen sub-centers throughout the state.

According to statistics that Gov. Campos disclosed as part of negotiations with Texas and news reports, the Plataforma Centinela will include: 

    • 1,791 automated license plate readers. These are cameras that photograph vehicles and their license plates, then upload that data along with the time and location where the vehicles were seen to a massive searchable database. Law enforcement can also create lists of license plates to track specific vehicles and receive alerts when those vehicles are seen. 
    • 4,800 fixed cameras. These are your run-of-the-mill cameras, positioned to permanently surveil a particular location from one angle.  
    • 3,065 pan-tilt-zoom (PTZ) cameras. These are more sophisticated cameras. While they are affixed to a specific location, such as a street light or a telephone pole, these cameras can be controlled remotely. An operator can swivel the camera around 360-degrees and zoom in on subjects. 
    • 2,000 tablets. Officers in the field will be issued handheld devices for accessing data directly from the Plataforma Centinela.
    • 102 security arches. This is a common form of surveillance in Mexico, but not the United States. These are structures built over highways and roads to capture data on passing vehicles and their passengers. 
    • 74 drones (Unmanned Aerial Vehicles/UAVs). While the Chihuahua government has not disclosed what surveillance payload will be attached to these drones, it is common for law enforcement drones to deploy video, infrared, and thermal imaging technology.
    • 40 mobile video surveillance trailers. While details on these systems are scant, it is likely these are camera towers that can be towed to and parked at targeted locations. 
    • 15 anti-drone systems. These systems are designed to intercept and disable drones operated by criminal organizations.
    • Face recognition. The project calls for the application of "biometric filters" to be applied to camera feeds "to assist in the capture of cartel leaders," and the collection of migrant biometrics. Such a system would require scanning the faces of the general public.
    • Artificial intelligence. So far, the administration has thrown around the term AI without fully explaining how it will be used. Typically, however, law enforcement agencies have used this technology to "predict" where crime might occur, identify individuals most likely to be connected to crime, and surface potential connections between suspects that would not have been obvious to a human observer. All of these technologies have a propensity for making errors or exacerbating existing bias. 
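To make concrete how an ALPR network like the one described above turns individual camera reads into mass tracking, here is a minimal, purely illustrative sketch. It is not based on any Seguritech or government code; every class and name is hypothetical. It shows the two capabilities the equipment list describes: retaining every read in a searchable database, and alerting on hotlisted plates.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class PlateRead:
    """One camera observation: a plate, plus when and where it was seen."""
    plate: str
    timestamp: datetime
    latitude: float
    longitude: float
    camera_id: str

class ALPRDatabase:
    """Toy model of an ALPR back end: every read is stored and searchable,
    and reads matching a hotlist trigger an alert."""

    def __init__(self):
        self.reads = []       # every vehicle seen, tied to time and place
        self.hotlist = set()  # plates flagged for tracking

    def ingest(self, read: PlateRead) -> bool:
        """Store the read; return True if it should trigger an alert."""
        self.reads.append(read)
        return read.plate in self.hotlist

    def history(self, plate: str) -> list:
        """Searchable: reconstruct everywhere a given plate has been seen."""
        return [r for r in self.reads if r.plate == plate]

db = ALPRDatabase()
db.hotlist.add("ABC123")
alert = db.ingest(PlateRead("ABC123", datetime(2023, 5, 1, 9, 30), 31.74, -106.48, "arch-07"))
db.ingest(PlateRead("XYZ999", datetime(2023, 5, 1, 9, 31), 31.74, -106.48, "arch-07"))
```

Even this toy version makes the privacy problem visible: the database answers "where has this car been?" for every car ever seen, not just for vehicles under suspicion.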

As of May, 60% of the Plataforma Centinela camera network had been installed, with an expected completion date of December, according to Norte Digital. However, the cameras were already being used in criminal investigations. 

All combined, this technology amounts to an unprecedented expansion of the surveillance state in Latin America, as SSPE brags in its promotional material. The threat to privacy may also be unprecedented: creating cities where people can no longer move freely in their communities without being watched, scanned, and tagged.

But that's assuming the system functions as advertised—and based on the main contractor's history, that's anything but guaranteed. 

Who Is Seguritech?

The Plataforma Centinela project is being built by the megacorporation Seguritech, which has signed deals with more than a dozen government entities throughout Mexico. As of 2018, the company received no-bid contracts in at least 10 Mexican states and cities, which means it was able to sidestep the accountability process that requires companies to compete for projects.

And when it comes to the Plataforma Centinela, the company isn't simply a contractor: It will actually have ownership over the project, the Torre Centinela, and all its related assets, including cameras and drones, until August 2027.

That's what SSPE Secretary Gilberto Loya Chávez told the news organization Norte Digital, but the terms of the agreement between Seguritech and Chihuahua's administration are not public. The SSPE's Transparency Committee decided to classify the information "concerning the procedures for the acquisition of supplies, goods, and technology necessary for the development, implementation, and operation of the Plataforma Centinela" for five years.

In spite of the opacity shrouding the project, journalists have surfaced some information about the investment plan. According to statements from government officials, the Plataforma Centinela will cost 4.2 billion pesos, with Chihuahua's administration paying regular installments to the company every three months (Chihuahua's governor had previously said that these would be yearly payments of 700 million to 1 billion pesos). According to news reports, when the payments are completed in 2027, ownership of the platform's assets and infrastructure is expected to pass from Seguritech to the state of Chihuahua.

The Plataforma Centinela project marks a new pinnacle in Seguritech's trajectory as a Mexican security contractor. Founded in 1995 as a small business selling neighborhood alarms, SeguriTech Privada S.A. de C.V. became a highly profitable brand, and currently operates in five areas: security, defense, telecommunications, aeronautics, and construction. According to Zeta Tijuana, Seguritech also secures contracts through its affiliated companies, including Comunicación Segura (focused on telecommunications and security) and Picorp S.A. de C.V. (focused on architecture and construction, including prisons and detention centers). Zeta also identified another Seguritech company, Tres10 de C.V., as the contractor named in various C5i projects.

Thorough reporting by Mexican outlets such as Proceso, Zeta Tijuana, Norte Digital, and Zona Free paint an unsettling picture of Seguritech's activities over the years.

Former President Felipe Calderón's war on drug trafficking, initiated during his 2006-2012 term, marked an important turning point for surveillance in Mexico. As Proceso reported, Seguritech began to secure major government contracts beginning in 2007, receiving its first billion-peso deal in 2011 with Sinaloa's state government. In 2013, avoiding the bidding process, the company secured a 6-billion peso contract assigned by Eruviel Ávila, then governor of the state of México (or Edomex, not to be confused with the country of Mexico). During Enrique Peña Nieto's years as Edomex's governor, and especially later, as Mexico's president, Seguritech secured its status among Mexico's top technology contractors.

According to Zeta Tijuana, during the six years that Peña Nieto served as president (2012-2018), the company monopolized contracts for the country's main surveillance and intelligence projects, specifically the C5i centers. As Zeta Tijuana writes:

"More than 10 C5i units were opened or began construction during Peña Nieto's six-year term. Federal entities committed budgets in the millions, amid opacity, violating parliamentary processes and administrative requirements. The purchase of obsolete technological equipment was authorized at an overpriced rate, hiding information under the pretext of protecting national security."

Zeta Tijuana further cites records from the Mexican Institute of Industrial Property showing that Seguritech registered the term "C5i" as its own brand, an apparent attempt to make it more difficult for other surveillance contractors to provide services under that name to the government.

Despite promises from government officials that these huge investments in surveillance would improve public safety, the country’s number of violent deaths increased during Peña Nieto's term in office.

"What is most shocking is how ineffective Seguritech's system is," says Quintana, the spokesperson for FPCDDH. By his analysis, Quintana says, "In five out of six states where Seguritech entered into contracts and provided security services, the annual crime rate shot up in proportions ranging from 11% to 85%."

Seguritech has also been criticized for inflated prices, technical failures, and deploying obsolete equipment. According to Norte Digital, only 17% of surveillance cameras were working by the end of the company's contract with Sinaloa's state government. Proceso notes the rise of complaints about the malfunctioning of cameras in Cuauhtémoc Delegation (a borough of Mexico City) in 2016. Zeta Tijuana reported on the disproportionate amount the company charged for installing 200 obsolete 2-megapixel cameras in 2018.

Seguritech's track record led to formal complaints and judicial cases against the company. The company has responded to this negative attention by hiring services to take down and censor critical stories about its activities published online, according to investigative reports published as part of the Global Investigative Journalism Network's Forbidden Stories project.

Yet, none of this information dissuaded Chihuahua's governor, Maru Campos, from closing a new no-bid contract with Seguritech to develop the Plataforma Centinela project. 

A Cross-Border Collaboration

The Plataforma Centinela project presents a troubling escalation in cross-border partnerships between states, one that cuts out each nation's respective federal governments. In April 2022, the states of Texas and Chihuahua signed a memorandum of understanding to collaborate on reducing "cartels' human trafficking and smuggling of deadly fentanyl and other drugs" and to "stop the flow of migrants from over 100 countries who illegally enter Texas through Chihuahua."

A slide describing the "New Border Model"

While much of the agreement centers around cargo at the points of entry, the document also specifically calls out the various technologies that make up the Plataforma Centinela. In attachments to the agreement, Gov. Campos promises Chihuahua is "willing to share that information with Texas State authorities and commercial partners directly."

During a press conference announcing the MOU, Gov. Abbott declared, “Governor Campos has provided me with the best border security plan that I have seen from any governor from Mexico.” He held up a three-page outline and a slide, which were provided to the public, but he also referenced the existence of "a much more extensive detailed memo that explains in nuance" all the aspects of the program.

Abbott went on to read out a summary of Plataforma Centinela, adding, "This is a demonstration of commitment from a strong governor who is working collaboratively with the state of Texas."

Then Campos, in response to a reporter's question, added: "We are talking about sharing information and intelligence among states, which means the state of Texas will have eyes on this side of the border." She added that the data collected through the Plataforma Centinela will be analyzed by both the states of Chihuahua and Texas.

Abbott provided an example of one way the collaboration will work: "We will identify hotspots where there will be an increase in the number of migrants showing up because it's a location chosen by cartels to try to put people across the border at that particular location. The Chihuahua officials will work in collaboration with the Texas Department of Public Safety, where DPS has identified that hotspot and the Chihuahua side will work from a law enforcement side to disrupt that hotspot."

In order to learn more about the scope of the project, EFF sent public records requests to several Texas agencies, including the Governor's Office, the Texas Department of Public Safety, the Texas Attorney General's Office, the El Paso County Sheriff, and the El Paso Police Department. Not one of the agencies produced records related to the Plataforma Centinela project.

Meanwhile, Texas is further beefing up its efforts to use technology at the border, including by enacting new laws that formally allow the Texas National Guard and State Guard to deploy drones at the border and authorize the governor to enter compacts with other states to share intelligence and resources to build "a comprehensive technological surveillance system" on state land to deter illegal activity at the border. In addition to the MOU with Chihuahua, Abbott also signed similar agreements with the states of Nuevo León and Coahuila in 2022.

Two Sides, One Border

The Plataforma Centinela has enormous potential to violate the rights of one of the largest cross-border populations along the U.S.-Mexico border. But while law enforcement officials are eager to collaborate and traffic data back and forth, advocacy efforts around surveillance too often are confined to their respective sides.

The Spanish-language press in Mexico has devoted significant resources to investigating the Plataforma Centinela and raising the alarm over its lack of transparency and accountability, as well as its potential for corruption. Yet, the project has received virtually no attention or scrutiny in the United States. 

Fighting back against surveillance of cross-border communities requires cross-border efforts. EFF supports the efforts of advocacy groups in Ciudad Juarez and other regions of Chihuahua to expose the mistakes the Chihuahua government is making with the Plataforma Centinela and to call out its mammoth surveillance approach for failing to address the root social issues. We also salute the efforts by local journalists to hold the government accountable. However, U.S.-based journalists, activists, and policymakers—many of whom have done an excellent job surfacing criticism of Customs and Border Protection's so-called virtual wall—must also turn their attention to the massive surveillance apparatus building up on the Mexican side.

In reality, there is no such thing as Mexican surveillance versus U.S. surveillance. It’s one massive surveillance monster that, ironically, in the name of border enforcement, recognizes no borders itself.
