
What Can Go Wrong When Police Use AI to Write Reports?

Axon—the maker of widely used police body cameras and Tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative AI system built on a large language model that reportedly takes audio from body-worn cameras and converts it into a narrative police report that officers can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially the marginalized communities already subject to a disproportionate share of police interactions in the United States.

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before: grainy, shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or bring enhanced criminal charges. Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately lie or exaggerate to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability thanks to AI-generated police reports. If police were caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between them could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a face recognition match without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how will the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report were generated by AI and which by a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

March 22, 2024 at 19:10

This post was written by Rachel Hochhauser, an EFF legal intern

We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed child, estimated to be “maybe 14 or 15” years old, in his backyard. Three officers approached the boy’s house, with one asking, “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company producing and distributing audio gunshot detection for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to humans who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring) and whether to deploy officers to the scene.

ShotSpotter claims that its technology is “97% accurate,” a figure produced by the marketing department and not engineers. The recent Chicago shooting shows this is not accurate. Indeed, a 2021 study in Chicago found that, in a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and only resulted in arrest for 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter arrive at the scenes expecting gunfire and are on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such recordings from ShotSpotter to be admissible evidence.

In February, Chicago’s Mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled or are considering cancelling use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging the rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement has also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of the suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering DNA phenotyping, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, with brown eyes and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of being a suspect for a crime they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one need to be perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests

March 19, 2024 at 18:55

The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.  

The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the same state Open Records Act as to its functions that are on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department’s Video Surveillance Center, which integrates footage from over 16,000 public and privately-held surveillance cameras across the city. 

According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.” 

Police foundations frequently operate in this space. They are private, non-profit organizations with boards made up of corporations and law firms that receive monetary or equipment donations that they then gift to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight. 

Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.” 

You can read the lawsuit here. 

San Diego City Council Breaks TRUST

March 15, 2024 at 14:54

In a stunning reversal against the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance, the San Diego city council voted earlier this year to cut many of the provisions that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations—including several members of the Electronic Frontier Alliance—responding to surprise smart streetlight surveillance that was not put under public or city council review.

The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t run into conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance. This included exempting a long list of technologies, including ARJIS databases and record management system data storage. These changes were approved this past January.

But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance stands, but partly defanged with many carve-outs for technologies the San Diego police will not need to bring before democratically-elected lawmakers and the public. 

Now, opponents of the TRUST ordinance are emboldened with their recent victory, and are vowing to introduce even more amendments to further erode the gains of this ordinance so that San Diegans won’t have a chance to know how their local law enforcement surveils them, and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the standing portions of the TRUST ordinance, and to regain the wins for public oversight that were lost. 

As Lilly Irani, from Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

The Atlas of Surveillance Removes Ring, Adds Third-Party Investigative Platforms

Running the Atlas of Surveillance, our project to map and inventory police surveillance across the United States, means experiencing emotional extremes.

Whenever we announce that we've added new data points to the Atlas, it comes with a great sense of satisfaction. That's because it almost always means that we're hundreds or even thousands of steps closer to achieving what only a few years ago would've seemed impossible: comprehensively documenting the surveillance state through our partnership with students at the University of Nevada, Reno Reynolds School of Journalism.

At the same time, it's depressing as hell. That's because it also reflects how quickly and dangerously surveillance technology is metastasizing.

We have the exact opposite feeling when we remove items from the Atlas of Surveillance. It's a little sad to see our numbers drop, but at the same time that change in data usually means that a city or county has eliminated a surveillance program.

That brings us to the biggest change in the Atlas since its launch. This week, we removed 2,530 data points: an entire category of surveillance. With the announcement from Amazon that its home surveillance company Ring will no longer facilitate warrantless requests for consumer video footage, we've decided to sunset that particular dataset.

While law enforcement agencies still maintain accounts on Ring's Neighbors social network, it seems to serve as a communications tool, a function on par with services like Nixle and Citizen, which we currently don't capture in the Atlas. That's not to say law enforcement won't be gathering footage from Ring cameras: they will, through legal process or by directly asking residents to give them access via the Fusus platform. But that type of surveillance doesn't result from merely having a Neighbors account (agencies without accounts can use these methods to obtain footage), which was what our data documented. You can still find out which agencies are maintaining camera registries through the Atlas. 

Ring's decision was a huge victory—and the exact outcome EFF and other civil liberties groups were hoping for. It also has opened up our capacity to track other surveillance technologies growing in use by law enforcement. If we were going to remove a category, we decided we should add one too.

Atlas of Surveillance users will now see a new type of technology: Third-Party Investigative Platforms, or TPIPs. Common TPIP products include Thomson Reuters CLEAR, LexisNexis Accurint Virtual Crime Center, TransUnion TLOxp, and SoundThinking CrimeTracer (formerly Coplink X from Forensic Logic). These are technologies we've been watching for a while but have struggled to categorize and define. Here's the definition we've come up with:

Third-Party Investigative Platforms are cloud-based software systems that law enforcement agencies subscribe to in order to access, share, mine, and analyze various sources of investigative data. Some of the data the agencies upload themselves, but the systems also provide access to data from other law enforcement, as well as from commercial sources and data brokers. Many products offer AI features, such as pattern identification, face recognition, and predictive analytics. Some agencies employ multiple TPIPs.

We are calling this new category a beta feature in the Atlas, since we are still figuring out how best to research and compile this data nationwide. You'll find fairly comprehensive data on the use of CrimeTracer in Tennessee and Massachusetts, because both states provide the software to local law enforcement agencies throughout the state. Similarly, we've got a large dataset for the use of the Accurint Virtual Crime Center in Colorado, due to a statewide contract. (Big thanks to Prof. Ran Duan's Data Journalism students for working with us to compile those lists!) We've also added more than 60 other agencies around the country, and we expect that dataset to grow as we hone our research methods.

If you've got information on the use of TPIPs in your area, don't hesitate to reach out. You can email us at aos@eff.org, submit a tip through our online form, or file a public records request using the template that EFF and our students have developed to reveal the use of these platforms. 

We Flew a Plane Over San Francisco to Fight Proposition E. Here's Why.

February 29, 2024 at 15:19

Proposition E, which San Franciscans will be asked to vote on in the March 5 election, is so dangerous that last weekend we chartered a plane to inform our neighbors about what the ballot measure does and urge them to vote NO on it. If you were in Dolores Park, Golden Gate Park, Chinatown, or anywhere in between on Saturday, there’s a chance you saw it, with a huge banner flying through the sky: “No Surveillance State! No on Prop E.”

Despite the fact that the San Francisco Chronicle has endorsed a NO vote on Prop E, and even quoted some police who don’t find its changes useful for keeping the public safe, proponents of Prop E have raised over $1 million to push this unnecessary, ill-thought-out, and downright dangerous ballot measure.

San Francisco, Say NOPE: Vote NO on Prop E on March 5

[Image: A plane flying over the San Francisco skyline carrying a banner asking people to vote no on Prop E]

What Does Prop E Do?

Prop E is a haphazard mess of proposals that tries to capitalize on residents’ fear of crime in an attempt to gut commonsense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the civilian-staffed Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Prop E would also amend existing law passed in 2019 to protect San Franciscans from invasive, untested, or biased police surveillance technologies. Currently, if the SFPD wants to acquire a new technology, they must provide a detailed use policy to the democratically-elected Board of Supervisors, in a process that allows for public comment. The Board then votes on whether and how the police can use the technology.

Prop E guts these protective measures designed to bring communities into the conversation about public safety. If Prop E passes on March 5, then the SFPD can unilaterally use any technology they want for a full year without the Board’s approval, without publishing an official policy about how they’d use the technology, and without allowing community members to voice their concerns.

Why is Prop E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency, accountability, or democratic control.

San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body. Prop E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

What Technology Would Prop E Allow Police to Use?

That's the thing—we don't know, and if Prop E passes, we may never know. Today, if the SFPD decides to use a piece of surveillance technology, there is a process for sharing that information with the public. With Prop E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. 

Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Prop E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology. San Francisco currently has a ban on police using remote-controlled robots to deploy deadly force, but if passed, Prop E would allow police to invest in technologies like taser-armed drones without any oversight or potential for elected officials to block the sale.

Don’t let police experiment on San Franciscans with dangerous, untested surveillance technologies. Say NOPE to a surveillance state. Vote NO on Prop E on March 5.  

What is Proposition E and Why Should San Francisco Voters Oppose It?

February 2, 2024 at 18:39

If you live in San Francisco, there is an election on March 5, 2024, during which voters will decide a number of local ballot measures—including Proposition E. Proponents of Proposition E have raised over $1 million, but what does the measure actually do? This post will break down what the initiative actually does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink" approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police used this procedure to get new technological capabilities as recently as Fall 2022, in a way that stimulated discussion and garnered community involvement and opposition (including from EFF), and the request still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find that these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing, it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars.

What Technology Would Proposition E Allow the Police to Use?

That's the thing—we don't know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

San Francisco Police’s Live Surveillance Yields Almost 200 Hours of Spying–Including of Music Festivals

A new report reveals that in just three months, from July 1 to September 30, 2023, the San Francisco Police Department (SFPD) racked up 193 hours and 19 minutes of live access to non-city surveillance cameras. That means for the equivalent of 8 days, police sat behind a desk and tapped into hundreds of cameras, ostensibly including San Francisco’s extensive semi-private security camera networks, to watch city residents, workers, and visitors live. An article by the San Francisco Chronicle analyzing the report also uncovered that the SFPD tapped into these cameras to watch 42 hours of live footage during the Outside Lands music festival.

The city’s Board of Supervisors granted police permission to get live access to these cameras in September 2022 as part of a 15-month pilot program to see if allowing police to conduct widespread, live surveillance would create more safety for all people. However, even before this legislation’s passage, the SFPD covertly used non-city security cameras to monitor protests and other public events. In fact, police and the rich man who funded large networks of semi-private surveillance cameras both claimed publicly that the police department could easily access historic footage of incidents after the fact to help build cases, but could not peer through the cameras live. This claim was debunked by EFF and other investigators, who revealed that police requested live access to semi-private cameras to monitor protests, parades, and public events—despite these being exactly the type of activity protected by the First Amendment.

When the Board of Supervisors passed this ordinance, which allowed police live access to non-city cameras for criminal investigations (for up to 24 hours after an incident) and for large-scale events, we warned that police would use this newfound power to put huge swaths of the city under surveillance—and we were unfortunately correct.

The most egregious example from the report is the 42 hours of live surveillance conducted during the Outside Lands music festival, which yielded five arrests for theft, pickpocketing, and resisting arrest—only one of which resulted in the District Attorney’s office filing charges. Despite proponents’ arguments that live surveillance would promote efficiency in policing, in this case it resulted in a massive use of police resources with little to show for it.

There still remain many unanswered questions about how the police are using these cameras. As the Chronicle article recognized:

…nearly a year into the experiment, it remains unclear just how effective the strategy of using private cameras is in fighting crime in San Francisco, in part because the Police Department’s disclosures don’t provide information on how live footage was used, how it led to arrests and whether police could have used other methods to make those arrests.

The need for greater transparency—and at minimum, for the police to follow all reporting requirements mandated by the non-city surveillance camera ordinance—is crucial to truly evaluate the impact that access to live surveillance has had on policing. In particular, the SFPD’s data fails to make clear how live surveillance helps police prevent or solve crimes in a way that footage after the fact does not. 

Nonetheless, surveillance proponents tout this report as showing that real-time access to non-city surveillance cameras is effective in fighting crime. Many are using this to push for a measure on the March 5, 2024 ballot, Proposition E, which would roll back police accountability measures and grant even more surveillance powers to the SFPD. In particular, Prop E would allow the SFPD a one-year pilot period to test out any new surveillance technology, without any use policy or oversight by the Board of Supervisors. As we’ve stated before, this initiative is bad all around—for policing, for civil liberties, and for all San Franciscans.

Police in San Francisco still don’t get it. They can continue to heap more time, money, and resources into fighting oversight and amassing all sorts of surveillance technology—but at the end of the day, this still won’t help combat the societal issues the city faces. Technologies touted as being useful in extreme cases will just end up as an oversized tool for policing misdemeanors and petty infractions, and will undoubtedly put already-marginalized communities further under the microscope. Just as it’s time to continue asking questions about what live surveillance helps the SFPD accomplish, it’s also time to oppose the erosion of existing oversight by voting NO on Proposition E on March 5. 

San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

January 25, 2024 at 13:14

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

[Embedded video from youtube.com]

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, the city employee said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.

Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance that requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities—it’s also, by any measure, a commonsense and reasonable provision.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “…the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.

This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly-science fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn’t end up doing more damage to the patient. But that’s exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies. 

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

January 24, 2024 at 14:09

Amazon’s Ring has announced that it will no longer facilitate police's warrantless requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools to allow law enforcement to more easily access companies’ users and their data—all of which ultimately undermine their customers’ trust.

Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user’s email inbox, opting instead for a system where police had to publicly post requests on Ring’s Neighbors app. Now, Ring will hopefully be out of the business of platforming casual and warrantless police requests for footage from its users altogether. This is a step in the right direction, but it comes after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring can make end-to-end encryption the default on its devices and turn off default audio collection, which reports have shown captures audio from greater distances than initially assumed. We also remain deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy and to end Ring’s historic ill effects on society isn’t over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring’s early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies. This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage.

The Atlas of Surveillance Hits Major Milestones: 2023 in Review

December 28, 2023 at 11:24

"The EFF are relentless."

That's what a New York Police Department lieutenant wrote on LinkedIn after someone sent him a link to the Atlas of Surveillance, EFF's moonshot effort to document which U.S. law enforcement agencies are using which technologies, including drones, automated license plate readers and face recognition. Of course, the lieutenant then went on to attack us with unsubstantiated accusations of misinformation — but we take it all as a compliment.

If you haven't checked out the Atlas of Surveillance recently, or ever before, you absolutely should. It includes a searchable database and an interactive map, and anyone can download the data for their own projects. As this collaboration with the University of Nevada Reno's Reynolds School of Journalism (RSJ) finishes its fifth year, we are proud to announce that we've hit a major milestone: more than 12,000 data points that document the use of police surveillance nationwide, all collected using open-source investigative techniques, data journalism, and public records requests.
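If you want to poke at that data yourself, a few lines of Python will get you started. The sketch below is only an illustration, not an official Atlas tool: it assumes you have downloaded a CSV export of the dataset to a local file, and the filename and column names used here ("City", "State", "Agency", "Type of Technology") are assumptions you should check against the headers in the file you actually download.

```python
import csv
from collections import Counter

# Minimal sketch: load a local CSV export of the Atlas of Surveillance dataset.
# The filename and column names are assumptions; adjust them to match the
# headers in the file you download from the project site.
with open("atlas_of_surveillance.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Count how many entries document each category of technology.
tech_counts = Counter(row["Type of Technology"] for row in rows)
for tech, count in tech_counts.most_common():
    print(f"{tech}: {count}")

# List the agencies in one state documented as using a particular technology.
nevada_drone_agencies = sorted(
    row["Agency"]
    for row in rows
    if row["State"] == "NV" and row["Type of Technology"] == "Drones"
)
print(nevada_drone_agencies)
```

From there, the same rows can feed a map, a newsroom spreadsheet, or a list of agencies to target with public records requests.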

We’ve come a long way since the Atlas of Surveillance launched as a pilot project with RSJ back in the spring semester of 2019. By that summer, with the help of a few dozen journalism students, we had accumulated 250 data points, focused on the 23 counties along the U.S.-Mexico border. When we launched the formal website in 2020, we had collected a little more than 5,500 data points. Today's dataset represents more than a 100% increase since then.

That isn't the only major milestone we accomplished this year. To collect data for the project, EFF and RSJ designed a tool called Report Back, which allows us to distribute micro-research assignments (about 10-20 minutes each) to students in our classes. This winter, the 3,000th assignment was completed using Report Back.

This year we also dug into one particular technology. As part of our Atlas efforts, we began to see Fusus—a company working to bring real-time surveillance to local police departments via camera registries and real-time crime centers—appear more frequently as a tool used by law enforcement. In collaboration with the Thomson Reuters Foundation, we decided to do a deeper dive into the adoption of Fusus, and the Atlas has served as a resource for other reporters working to investigate this company in their own towns and across the country.

We’re proud to have built the Atlas because it’s meant to be a tool for the public, and we're excited to see more and more people discovering it. This year, we clocked about 250,000 pageviews, more than double what we've seen in previous years. This tells us not only that more people care about police surveillance than ever before, but that we're better able to inform them about what's happening locally in their communities. The top 20 jurisdictions with the most traffic include:

  1. Phoenix, Ariz.
  2. Chicago, Ill.
  3. Los Angeles, Calif.
  4. Atlanta, Ga.
  5. New York City, N.Y.
  6. Austin, Texas
  7. Houston, Texas
  8. San Antonio, Texas
  9. Seattle, Wash.
  10. Columbus, Ohio  
  11. Las Vegas, Nev.
  12. Dallas, Texas
  13. Philadelphia, Penn.
  14. Denver, Colo. 
  15. Tampa, Fla.
  16. West Bloomfield, Mich.
  17. Portland, Ore.
  18. San Diego, Calif.
  19. Nashville, Tenn.
  20. Pittsburgh, Penn. 

One of the primary goals of the Atlas of Surveillance project is to reach journalists, academics, activists, and policymakers, so they can use our data to better inform their research. In this sense, 2023 was a huge success. Here are some of our favorite projects that used Atlas of Surveillance data this year:

  • Social justice advocates were trained on how to use the Atlas of Surveillance in a workshop titled "Data Brokers & Modern Surveillance: Dangers for Marginalized People" at an annual Friends (Quakers) conference. 
  • A team of master’s students at the University of Amsterdam built a website called "Beyond the Lens" that analyzes the police surveillance industry using primary data from the Atlas of Surveillance. 
  • The Markup combined Atlas data with census data, crime data, and emails obtained through the California Public Records Act to investigate the Los Angeles Police Department's relationship with Ring, Amazon's home video surveillance subsidiary. 

The Atlas has also been cited in government proceedings and court briefs.

The Atlas also made appearances in many academic and legal scholarship publications in 2023.

Meanwhile, print, radio, and television journalists continued to turn to the Atlas this year as a resource, either to build stories about police surveillance or to provide context.

Activists, advocates, and concerned citizens around the nation have also used the Atlas of Surveillance to support their actions against the expansion of surveillance.

These victories wouldn't be possible without the students at RSJ, especially our 2023 interns Haley Ekberg, Kieran Dazzo, Dez Peltzer, and Colin Brandes. We also owe thanks to lecturers Paro Pain, Ran Duan, Jim Scripps, and Patrick File for sharing their classrooms with us.

In 2024, EFF will expand the Atlas to capture more technologies used by law enforcement agencies. We are also planning new features, functions and fixes that allow users to better browse and analyze the data.  And of course, you should keep an eye out in the new year for new workshops, talks, and other opportunities to learn more and get involved with the project.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Artificial Intelligence and Policing: Year in Review 2023

December 23, 2023 at 12:33

Machine learning, artificial intelligence, algorithmic decision-making—regardless of what you call it (and there is hot debate over that), this technology has been touted as a supposed threat to humanity and the future of work, as well as the hot new money-making doohickey. But one thing is for certain: given the amount of data these systems require as input, law enforcement sees major opportunities, and our civil liberties will suffer the consequences. In one sense, all of the information needed to, for instance, run a self-driving car presents a new opportunity for law enforcement to piggyback on new devices covered in cameras, microphones, and sensors to be their eyes and ears on the streets. This is exactly why at least one U.S. Senator has begun sending letters to car manufacturers hoping to get to the bottom of exactly how much data vehicles, including those deemed autonomous or with “self-driving” modes, collect and who has access to it.

But in another way, the possibility of plugging a vast amount of information into a system and getting automated responses or directives is also rapidly becoming a major problem for innocent people hoping to go un-harassed and un-surveilled by police. So much has been written in the last few years about how predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and just plain-old don’t work. One investigation from the Markup and WIRED found, “Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.”

This year, Georgetown Law’s Center on Privacy and Technology also released an incredible resource: Cop Out. This is a massive and useful investigation into automation in the criminal justice system and the several moments, from policing to parole, when a person might have their fate decided by a machine.

EFF has long called for a ban on predictive policing and commended cities like Santa Cruz when they took that step. The issue became especially important in recent months when SoundThinking, the company behind ShotSpotter—an acoustic gunshot detection technology that is rife with problems—was reported to be buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. SoundThinking acquired the other major predictive policing technology, HunchLab, in 2018. This consolidation of harmful and flawed technologies means it’s even more critical for cities to move swiftly to ban the harmful tactics of both of these technologies.

In 2024, we’ll continue to monitor the rapid rise of police utilizing machine learning, both by cannibalizing the data that other “autonomous” devices require and by creating or contracting their own algorithms to help guide law enforcement and other branches of the criminal justice system. This year, we hope that more cities and states will continue the good work by banning the use of this dangerous technology.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Surveillance and the U.S.-Mexico Border: 2023 Year in Review

December 21, 2023 at 11:06

The U.S.-Mexico border continues to be one of the most politicized spaces in the country, with leaders in both political parties supporting massive spending on border security, including technological solutions such as the so-called "virtual wall." We spent the year documenting surveillance technologies at the border and the impacts on civil liberties and human rights of those who live in the borderlands.

In early 2023, EFF staff completed the last of three trips to the U.S.-Mexico border, where we met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities.

Using information from those trips, as well as from public records, satellite imagery, and exploration in virtual reality, we released a map and dataset of more than 390 surveillance towers installed by Customs and Border Protection (CBP) along the U.S.-Mexico border. Our data serves as a living snapshot of the so-called "virtual wall," from the California coast to the lower tip of Texas. The data also lays the foundation for many types of research ranging from border policy to environmental impacts.

We also published an in-depth report on Plataforma Centinela (Sentinel Platform), an aggressive new surveillance system developed by Chihuahua state officials in collaboration with a notorious Mexican security contractor. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project is unlike anything seen before along the U.S.-Mexico border. The strategy adopts nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more. It also involves a 20-story high-rise in downtown Ciudad Juarez, known as the Torre Centinela (Sentinel Tower), that will serve as the central node of the surveillance operation. We’ll continue to keep a close eye on the development of this surveillance panopticon.

Finally, we weighed in on the dangers border surveillance poses to civil liberties by filing an amicus brief in the U.S. Court of Appeals for the Ninth Circuit. The case, Phillips v. U.S. Customs and Border Protection, was filed after a 2019 news report revealed the federal government was conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases. Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs. The plaintiffs asked the panel to reconsider or, in the alternative, for the full Ninth Circuit to rehear the case. In our amicus brief, we argued that the plaintiffs have privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology. We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm due to lax policies on data sharing, abuse, or data breach.

Undoubtedly, next year’s election will only heighten the focus on border surveillance technologies in 2024. As we’ve seen time and again, increasing surveillance at the border is a bipartisan strategy, and we don’t expect that to change in the new year.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

December 18, 2023 at 10:12

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments are using technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian way to address and streamline border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices, and in governments' collaboration across the region, violates human rights. This opacity goes hand in hand with the absence of effective safeguards that would let migrants know about, and decide, crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Is This the End of Geofence Warrants?

December 13, 2023 at 19:46

Google announced this week that it will be making several important changes to the way it handles users’ “Location History” data. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.
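To see why these warrants are so broad, it helps to look at the shape of the underlying query: it starts from everyone’s stored location data and narrows only by place and time, never by suspect. Below is a minimal, hypothetical sketch of that kind of dragnet filter; the record fields and bounding-box approach are illustrative assumptions, not Google’s actual schema or process:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical location record; not Google's actual Sensorvault schema.
@dataclass
class LocationRecord:
    device_id: str
    lat: float
    lon: float
    timestamp: datetime

def geofence_hits(records: list[LocationRecord],
                  lat_min: float, lat_max: float,
                  lon_min: float, lon_max: float,
                  start: datetime, end: datetime) -> set[str]:
    """Every device seen inside the box during the window, regardless of
    whether it has any connection to the crime being investigated."""
    return {
        r.device_id
        for r in records
        if lat_min <= r.lat <= lat_max
        and lon_min <= r.lon <= lon_max
        and start <= r.timestamp <= end
    }
```

Everything a query like this returns is, by construction, simply a person who happened to be nearby.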

Geofence warrants have been possible because Google collects specific user location data (which Google calls “Location History” data) and stores it all together in a massive database called “Sensorvault.” Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google’s announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user’s device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will “automatically encrypt your backed-up data so no one can read it, including Google.”
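That third change is the key one: when data is encrypted on the device with a key only the user holds, the provider stores ciphertext it cannot read and therefore cannot turn over in intelligible form. Here is a rough sketch of that general pattern using Python's `cryptography` package; it illustrates the generic client-side encryption technique, not Google's actual backup design:

```python
# Generic client-side encryption pattern: the key never leaves the device,
# so the cloud provider only ever sees ciphertext. Illustration only,
# not Google's actual implementation.
from cryptography.fernet import Fernet

# Key generated and kept on the user's device (in a real system it might be
# derived from a passphrase or held in the device keystore).
key = Fernet.generate_key()
fernet = Fernet(key)

location_history = b'{"points": [{"lat": 37.77, "lon": -122.42}]}'

ciphertext = fernet.encrypt(location_history)   # this is what gets uploaded
restored = fernet.decrypt(ciphertext)           # only possible with the key

assert restored == location_history
```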

All of this is fantastic news for users, and we are cautiously optimistic that this will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not only provide police with sensitive data on individuals but could also turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use.

However, we are not yet prepared to declare total victory. Google’s collection of users’ location data isn’t limited to just the “Location History” data searched in response to geofence warrants; Google collects additional location information as well. It remains to be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google’s changes will prevent law enforcement from issuing targeted warrants for individual users’ location data—outside of Location History—if police have probable cause to support such a search.

But for now, at least, we’ll take this as a win. It’s very welcome news for technology users as we usher in the end of 2023.

U.S. Senator: What Do Our Cars Know? And Who Do They Share that Information With?

December 1, 2023 at 13:44

U.S. Senator Ed Markey of Massachusetts has sent a much-needed letter to car manufacturers asking them to answer a surprisingly hard question: What data do cars collect, and who has the ability to access that data? Private companies can often be black boxes of secrecy that obscure basic facts about the consumer electronics we use. This becomes a massive problem as devices grow more technologically sophisticated and capable of collecting audio, video, and geolocation data, as well as biometric information. As the letter says,

“As cars increasingly become high-tech computers on wheels, they produce vast amounts of data on drivers, passengers, pedestrians, and other motorists, creating the potential for severe privacy violations. This data could reveal sensitive personal information, including location history and driving behavior, and can help data brokers develop detailed data profiles on users.”

Not only does the letter articulate the privacy harms imposed by vehicles (and trust us, cars are some of the least privacy-oriented devices on the market), it also asks probing questions of companies regarding what data is collected, who has access, particulars about how and for how long data is stored, whether data is sold, and how consumers and the public can go about requesting the deletion of that data.

Also essential are the questions concerning the relationship between car companies and law enforcement. We know, for instance, that self-driving car companies have built relationships with police and have, on a number of occasions, given footage to law enforcement to aid in investigations. Likewise, both Tesla employees and law enforcement have been given, or have gained, access to footage from the company's electric vehicles.

A push for public transparency by members of Congress is an essential and necessary first step toward some much-needed regulation. Self-driving cars, cars with autonomous modes, or even just cars connected to the internet and equipped with cameras pose a serious threat to privacy, not just to drivers and passengers, but also to other motorists on the road and to pedestrians who are forced to walk past these cars every day. We commend Senator Markey for this letter and hope that the companies respond quickly and honestly so we can have a better sense of what needs to change.

You can read the letter here.

It’s Time to Oppose the New San Francisco Policing Ballot Measure

November 9, 2023 at 21:34

San Francisco Mayor London Breed has filed a ballot initiative on surveillance and policing that, if approved, would greatly erode our privacy rights, endanger marginalized communities, and roll back the incredible progress the city has made in creating democratic oversight of police’s use of surveillance technologies. The measure will be up for a vote during the March 5, 2024 election.

Specifically, the ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance, which requires city agencies, including the police department, to seek approval from the democratically elected Board of Supervisors before they acquire or deploy new surveillance technologies. Agencies also need to put out a full report to the public about exactly how the technology would be used. This is an important way of making sure people who live or work in the city have a say in policing technologies that could be used in their communities.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “..the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approve by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…”  In other words, police would be able to deploy any technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.

But there is something we can do about this! It’s time to get the word out about what’s at stake during the March 5, 2024 election and urge voters to say NO to increased surveillance and decreased police accountability.

As in many other cities in the United States, this ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year in which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through it.

In the summer of 2020, in response to a mass Black-led movement against police violence that swept the nation, Mayor Breed said, “If we’re going to make real significant change, we need to fundamentally change the nature of policing itself…Let’s take this momentum and this opportunity at this moment to push for real change.” A central part of that vision was “ending the use of police in response to non-criminal activity; addressing police bias and strengthening accountability; [and] demilitarizing the police.”

It appears that Mayor Breed has turned her back on that stance and, with the introduction of her ballot measure, instead embraced increased surveillance and decreased police accountability.

There’s more: this Monday, November 13, 2023 at 10:00am PT, the Rules Committee of the Board of Supervisors will meet to discuss upcoming ballot measures, including this awful policing and surveillance ballot measure. You can watch the Rules Committee meeting here, and most importantly, the live feed will tell you how to call in and give public comment. Tell the Board’s Rules Committee that police should not have free rein to deploy dangerous and untested surveillance technologies in San Francisco.

VICTORY! California Department of Justice Declares Out-of-State Sharing of License Plate Data Unlawful

California Attorney General Rob Bonta has issued a legal interpretation and guidance for law enforcement agencies around the state that confirms what privacy advocates have been saying for years: It is against the law for police to share data collected from license plate readers with out-of-state or federal agencies. This is an important victory for immigrants, abortion seekers, protesters, and everyone else who drives a car, as our movements expose intimate details about where we’ve been and what we’ve been doing.

Automated license plate readers (ALPRs) are cameras that capture the movements of vehicles and upload the vehicles' locations to a searchable, shareable database. Law enforcement often installs these devices at fixed locations, such as streetlights, as well as on patrol vehicles used to canvass neighborhoods. It is a mass surveillance technology that collects data on everyone. In fact, EFF research has found that more than 99.9% of the data collected is unconnected to any crime or other public safety interest.
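That 99.9% figure reflects a simple "hit rate" calculation: compare the number of plate reads an agency collects to the number that match a hotlist of vehicles actually sought in connection with a crime or other public safety purpose. A toy sketch of the arithmetic, using invented counts rather than EFF's actual dataset:

```python
# Toy illustration of the ALPR "hit rate" math: what share of collected
# plate reads match a hotlist of vehicles police are actually looking for?
# The counts below are invented for illustration only.

total_scans = 1_000_000   # plate reads collected by an agency
hotlist_hits = 800        # reads matching a wanted/stolen-vehicle hotlist

hit_rate = hotlist_hits / total_scans
unconnected = 1 - hit_rate

print(f"hit rate: {hit_rate:.2%}")                     # 0.08%
print(f"unconnected to any crime: {unconnected:.2%}")  # 99.92%
```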

The California State Legislature passed SB 34 in 2015 to require basic safeguards for the use of ALPRs. These include a prohibition on California agencies sharing data with non-California agencies, as well as a requirement that agencies publish a usage policy consistent with civil liberties and privacy.

As EFF and other groups such as the ACLU of California, MuckRock News, and the Center for Human Rights and Privacy have demonstrated over and over again through public records requests, many California agencies have either ignored or defied these policies, putting Californians at risk. In some cases, agencies have shared data with hundreds of out-of-state agencies (including in states with abortion restrictions) and with federal agencies (such as U.S. Customs & Border Protection and U.S. Immigration & Customs Enforcement). This surveillance is especially threatening to vulnerable populations, such as migrants and abortion seekers, whose rights are protected in California but not recognized by other states or the federal government.

In 2019, EFF successfully lobbied the legislature to order the California State Auditor to investigate the use of ALPR. The resulting report came out in 2020, with damning findings that agencies were flagrantly violating the law. While state lawmakers have introduced legislation to address the findings, so far no bill has passed. In the absence of new legislative action, Attorney General Bonta's new memo, grounded in SB 34, serves as the authoritative guidance for how local agencies should treat ALPR data.

The bulletin comes after EFF and the California ACLU affiliates sued the Marin County Sheriff in 2021, because his agency was violating SB 34 by sending its ALPR data to federal agencies including ICE and CBP. The case was favorably settled.

Attorney General Bonta’s guidance also follows new advocacy by these groups earlier this year. Along with the ACLU of Northern California and the ACLU of Southern California, EFF released public records from more than 70 law enforcement agencies in California that showed they were sharing data with states that have enacted abortion restrictions. We sent letters to each of the agencies demanding they end the sharing immediately. Dozens complied. Some disagreed with our determination, but nonetheless agreed to pursue new policies to protect abortion access.

Now California’s top law enforcement officer has determined that out-of-state data sharing is illegal and has drafted a model policy. Every agency in California must follow Attorney General Bonta's guidance, review their data sharing, and cut off every out-of-state and federal agency.

Or better yet, they could end their ALPR program altogether.
