Montana Becomes First State to Close the Law Enforcement Data Broker Loophole

Montana has done something that many states and the United States Congress have debated but failed to do: it has enacted the first law closing the dreaded, invasive, unconstitutional, but easily fixed “data broker loophole.” This is a big step in the right direction, because right now, across the country, law enforcement routinely purchases information about individuals that it would otherwise need a warrant to obtain.

What does that mean? In every state other than Montana, if police want to know where you have been, they don’t need to present evidence and get a warrant signed by a judge to send to a company like Verizon or Google for your geolocation data covering a particular period of time; they can simply buy that same data from data brokers. In other words, all the location data the apps on your phone collect—sometimes recording your exact location every few minutes—is just sitting for sale on the open market. And police routinely take that as an opportunity to skirt your Fourth Amendment rights.

Now, with SB 282, Montana has become the first state to close the data broker loophole. This means the government may not pay for access to information about electronic communications (presumably metadata), the contents of electronic communications, the contents of communications sent by a tracking device, digital information on electronic funds transfers, pseudonymous information, or “sensitive data,” which Montana defines as information about a person’s private life, personal associations, religious affiliation, health status, citizenship status, biometric data, and precise geolocation. This does not mean such information is now fully off limits to police. There are other ways for law enforcement in Montana to gain access to sensitive information: they can get a warrant signed by a judge, they can get the consent of the owner to search a digital device, or they can get an “investigative subpoena,” which unfortunately requires far less justification than an actual warrant.

Despite the state’s insistence on preserving those lower-threshold subpoenas, SB 282 is not the first time Montana has been ahead of the curve in passing privacy-protecting legislation. For the better part of a decade, the Big Sky State has seriously limited the use of face recognition, passed consumer privacy protections, added an amendment to its constitution recognizing digital data as protected from unwarranted searches and seizures, and passed a landmark law protecting against the collection or disclosure of genetic information and DNA.

SB 282 is similar in approach to the Fourth Amendment Is Not For Sale Act, a federal bill EFF has endorsed that Senator Ron Wyden first introduced in the Senate. Its House companion, H.R.4639, passed the House in April 2024 but has not been taken up by the Senate.

With the United States Congress unable to pass important privacy protections into law, states, cities, and towns have taken it upon themselves to pass the legislation their residents sorely need to protect their civil liberties. Montana, with a population of just over one million people, is showing other states how it’s done. EFF applauds Montana for being the first state to close the data broker loophole and show the country that the Fourth Amendment is not for sale.

IRS-ICE Immigrant Data Sharing Agreement Betrays Data Privacy and Taxpayers’ Trust

In an unprecedented move, the U.S. Department of Treasury and the U.S. Department of Homeland Security (DHS) recently reached an agreement allowing the IRS to share with Immigration and Customs Enforcement (ICE) taxpayer information of certain immigrants. The redacted 15-page memorandum of understanding (MOU) was exposed in a court case, Centro de Trabajadores Unidos v. Bessent, which seeks to prevent the IRS from unauthorized disclosure of taxpayer information for immigration enforcement purposes. Weaponizing government data vital to the functioning and funding of public goods and services by repurposing it for law enforcement and surveillance is an affront to a democratic society. In addition to the human rights abuses this data-sharing agreement empowers, this move threatens to erode trust in public institutions in ways that could bear consequences for decades. 

Specifically, the government justifies the MOU by citing Executive Order 14161, issued on January 20, 2025, which directs the heads of several agencies, including DHS, to identify and remove individuals unlawfully present in the country. Making several leaps, the MOU states that DHS has identified “numerous” individuals who are unlawfully present and have final orders of removal, and that each of these individuals is “under criminal investigation” for violation of federal law—namely, “failure to depart” the country under 8 U.S.C. § 1253(a)(1). The MOU uses this as the basis for the IRS to disclose to ICE taxpayer information that is otherwise confidential under the tax code.

In practice, the new data-sharing process works like this: ICE submits a request containing an individual’s name and address, the taxable periods to which the return information pertains, the federal criminal statute being investigated, and the reasons why disclosure of this information is relevant to the criminal investigation. Once the IRS receives this request from ICE, the agency reviews it to determine whether it falls under an exception to the statutory confidentiality requirement, and provides an explanation if the request cannot be processed.

But there are two big reasons why this MOU fails to pass muster. 

First, as the NYU Tax Law Center identified:

“While the MOU references criminal investigations, DHS recently reportedly told IRS officials that ‘they would hope to use tax information to help deport as many as seven million people.’ That is far more people than the government could plausibly investigate, or who are plausibly subject to criminal immigration penalties, and suggests DHS’s actual reason for pursuing the tax data is to locate people for civil deportation, making any ‘criminal investigation’ a false pretext to get around the law.” 

Second, it’s unclear how the IRS would verify the accuracy of ICE’s requests. Recent events have demonstrated that ICE’s deportation mandate trumps all else—with ICE obfuscating, ignoring, or outright lying about how they conduct their operations and who they target. While ICE has fueled narratives about deporting “criminals” to a notorious El Salvador prison, reports have repeatedly shown that most of those deported had no criminal histories. ICE has even arrested U.S. citizens based on erroneous information and blatant racial profiling. But ICE’s lack of accuracy isn’t new—in fact, a recent settlement in the case Gonzalez v. ICE bars ICE from relying on its network of erroneous databases to issue detainer requests. In that case, EFF filed an amicus brief identifying the dizzying array of ICE’s interconnected databases, many of which were out of date and incomplete and yet were still relied upon to deprive people of their liberty. 

In the wake of the MOU’s signing, several top IRS officials have resigned. For decades, the agency maintained that its only interest was collecting tax revenue, and it promised to keep that information confidential. Undocumented immigrants were encouraged to file taxes despite being unable to reap benefits like Social Security because of their status. Many did, often because any promise of a future pathway to legalizing their immigration status hinged on having fulfilled their tax obligations. Others did because, as part of mixed-status families, they were able to claim certain tax benefits for their U.S. citizen children. The MOU weaponizes that trust and puts immigrants in an impossible situation: either fail to comply with tax law or risk deportation if their tax data ends up in ICE’s clutches.

This MOU is also sure to have a financial impact. In 2023, it was estimated that undocumented immigrants contributed $66 billion in federal and payroll taxes alone. Experts anticipate that due to the data-sharing agreement, fewer undocumented immigrants will file taxes, resulting in over $313 billion in lost tax revenue over 10 years. 

This move by the federal government not only betrays taxpayers and erodes vital trust in necessary civic institutions—it also reminds us of how little we have learned from U.S. history. After all, it was a piece of legislation passed in a time of emergency, the Second War Powers Act, that included the provision that allowed once-protected census data to assist in the incarceration of Japanese Americans during World War II. As the White House wrote in a report on big data in 2014, “At its core, public-sector use of big data heightens concerns about the balance of power between government and the individual. Once information about citizens is compiled for a defined purpose, the temptation to use it for other purposes can be considerable.” Rather than heeding this caution, this data-sharing agreement seeks to exploit it. This is yet another attempt by the current administration to sweep up and disclose large amounts of sensitive and confidential data. Courts must put a stop to these efforts to destroy data privacy, especially for vulnerable groups.

Announcing EFF’s New Exhibit on Border Surveillance and Accompanying Events

EFF has created a traveling exhibit, “Border Surveillance: Places, People, and Technology,” which will make its debut at the Angel Island Immigration Station historic site this spring.

The exhibition on Angel Island in San Francisco Bay will run from April 2, 2025 through May 28, 2025. We would especially like to thank the Angel Island Immigration Station Foundation and Angel Island State Park for their collaboration. You can learn more about the exhibit’s hours of operation and how to visit it here.

For the last several years, EFF has been amassing data and images detailing the massive increase in surveillance technology infrastructure at the U.S.-Mexico border. EFF staff members have made a series of trips along the U.S.-Mexico border, from the California coast to the tip of Texas, to learn from communities on both sides of the border; interview journalists, aid workers, and activists; and map and document border surveillance technology. We created the most complete open-source, publicly available map of border surveillance infrastructure. We tracked how the border has been used as a laboratory to test new surveillance technologies. We went to court to protect the privacy of digital information for people at the border. We even released a folder of more than 65 openly licensed images of border surveillance technology so that reporters, activists, and scholars can use alternative and open sources of visual information to inform discourse.

Now, we are hoping this traveling exhibit will be a way for us to share some of that information with the public. Think of it as Border Surveillance 101. 

We could not ask for a more poignant or significant place to launch this exhibit than the historic Angel Island Immigration Station. Between 1910 and 1940, hundreds of thousands of immigrants, primarily from Asia, hoping to enter the United States through the San Francisco Bay were detained at Angel Island. After the Chinese Exclusion Act of 1882 barred Chinese laborers from moving to the United States, immigrants were held on Angel Island for days, months, or, in some cases, even years while they awaited permission to enter the country. Unlike New York City’s Ellis Island, which became a monument to welcoming immigrants, Angel Island became a symbol of exclusion. To this day, the walls of the buildings where people awaited rulings on their immigration proceedings bear inscriptions and carved graffiti that show the depths of their uncertainty, alienation, fear—and hope.

We hope that by juxtaposing the human consequences of historic exclusion with today’s high-tech, digital surveillance under which hopeful immigrants, asylum seekers, and borderlands residents live, we will invite viewers to think about what side of history they want to be on. 

If your institution—be it a museum, library, school, or community center—is interested in hosting the exhibit in the future, please reach out to Senior Policy Analyst Matthew Guariglia at matthew@eff.org.

Programming

In addition to the physical exhibit that you can visit on Angel Island, EFF will host two events to further explore surveillance at the U.S.-Mexico border. On April 3, 2025, from 1-2pm PDT, EFF will be joined by journalists, activists, and researchers who operate on both sides of the border for a livestream event titled “Life and Migration Under Surveillance at the U.S.-Mexico Border.”

For people in the Bay Area, EFF will host an in-person event in San Francisco titled “Tracking and Documenting Surveillance at the U.S.-Mexico Border” on April 9, from 6-8pm, hosted by the Internet Archive. Please check our events page for more information and to RSVP.

Anchorage Police Department: AI-Generated Police Reports Don’t Save Time

The Anchorage Police Department (APD) has concluded its three-month trial of Axon’s Draft One, an AI system that uses audio from body-worn cameras to write narrative police reports for officers—and has decided not to retain the technology. Axon touts this technology as “force multiplying,” claiming it cuts in half the amount of time officers usually spend writing reports—but APD disagrees.

The APD deputy chief told Alaska Public Media, “We were hoping that it would be providing significant time savings for our officers, but we did not find that to be the case.” The deputy chief flagged that the time it took officers to review the reports cut into the time savings from generating them. The software translates the audio into a narrative, and officers are expected to read through the report carefully to edit it, add details, and verify its accuracy. Moreover, because the technology relies on audio from body-worn cameras, it often misses visual components of the story that the officer then has to add themselves. “So if they saw something but didn’t say it, of course, the body cam isn’t going to know that,” the deputy chief continued.

The Anchorage Police Department is not alone in finding that Draft One does not save officers time. A new study of police use of AI to write reports, which specifically tested Axon’s Draft One, found that AI-assisted report-writing offered no real time-savings advantage.

This news comes on the heels of policymakers and prosecutors casting doubt on the utility or accuracy of AI-created police reports. In Utah, a pending state bill seeks to make it mandatory for departments to disclose when reports have been written by AI. In King County, Washington, the Prosecuting Attorney’s Office has directed officers not to use any AI tools to write narrative reports.

In an era where companies that sell technology to police departments profit handsomely and have marketing teams to match, it can seem like there is an endless stream of press releases and local news stories about police acquiring some new and supposedly revolutionary piece of tech. But what we don’t usually get to see is how often departments decide that a technology is flawed or lacks utility. As the future of AI-generated police reports rightly remains hotly contested, it’s important to pierce the veil of corporate propaganda and see when and if police departments actually find these costly bits of tech useless or impractical.

VICTORY! Federal Court (Finally) Rules Backdoor Searches of 702 Data Unconstitutional

Better late than never: last night a federal district court held that backdoor searches of databases full of Americans’ private communications collected under Section 702 ordinarily require a warrant. The landmark ruling comes in a criminal case, United States v. Hasbajrami, after more than a decade of litigation, and over four years after the Second Circuit Court of Appeals found that backdoor searches constitute “separate Fourth Amendment events” and directed the district court to determine whether a warrant was required. Now, that question has been answered: it is.

In the intervening years, Congress has reauthorized Section 702 multiple times, each time ignoring overwhelming evidence that the FBI and the intelligence community abuse their access to databases of warrantlessly collected messages and other data. The Foreign Intelligence Surveillance Court (FISC), which Congress assigned with the primary role of judicial oversight of Section 702, has also repeatedly dismissed arguments that the backdoor searches violate the Fourth Amendment, giving the intelligence community endless do-overs despite its repeated transgressions of even lax safeguards on these searches.

This decision sheds light on the government’s liberal use of what is essentially a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702 allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But in cases where one side of a conversation is a person on U.S. soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the U.S. side of these communications has already been collected and is just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through it. EFF has argued for over a decade that this is unconstitutional, and now a federal court agrees with us.

Hasbajrami involves a U.S. resident who was arrested at New York’s JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only after his original conviction did the government explain that its case was premised in part on emails between Mr. Hasbajrami and an unnamed foreigner associated with terrorist groups, emails collected without a warrant under Section 702 programs, placed in a database, and then searched, again without a warrant, using terms related to Mr. Hasbajrami himself.

The district court found that regardless of whether the government can lawfully collect communications between foreigners and Americans under Section 702 without a warrant, it cannot ordinarily rely on a “foreign intelligence exception” to the Fourth Amendment’s warrant clause when searching those communications, as is the FBI’s routine practice. And even if such an exception did apply, the court found that the intrusion on privacy caused by reading our most sensitive communications rendered these searches “unreasonable” under the meaning of the Fourth Amendment. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ 702 data.

In light of this ruling, we ask Congress to uphold its responsibility to protect civil rights and civil liberties by refusing to renew Section 702 absent a number of necessary reforms, including an official warrant requirement for querying U.S. person data and increased transparency. Section 702 is set to expire on April 15, 2026. We expect any lawmaker worthy of that title to listen to what this federal court is saying and create a legislative warrant requirement so that the intelligence community does not continue to trample on the constitutionally protected right to private communications. More immediately, the FISC should amend its rules for backdoor searches and require the FBI to seek a warrant before conducting them.

Police Use of Face Recognition Continues to Rack Up Real-World Harms

Police have shown, time and time again, that they cannot be trusted with face recognition technology (FRT). It is too dangerous and invasive, and, in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether fully accurate or not, is too dangerous for police use, and that such use ought to be banned.

Now, The Washington Post has documented one more reason for this ban: police claim to use FRT only as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the most likely match spit out by the computer without first doing their own investigation.

The report also tells the stories of two men who were unknown to the public until now: Christopher Gatlin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami, respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence that would have shown that, whatever apparent match the computer found, they were not the right men.

This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that his department does not arrest people based solely on face recognition without proper follow-up investigation. “Matches are treated like an anonymous tip,” he said during the hearing.

Apparently not all officers got the memo.

We’ve seen this before. Many times. Gatlin and Vernau join a growing list of people known to have been wrongfully arrested around the United States based on police use of face recognition: Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that all of these people, Gatlin now among them, are Black. Scholars and activists have been raising the alarm for years that, in addition to a huge amount of police surveillance generally being directed at Black communities, face recognition specifically has a long history of lower accuracy when identifying people with darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, instituting strict new guidelines about the use of face recognition technology.

Cities across the United States have decided to join the growing movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.

Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in ways that violate their own departmental policies.

We continue to urge cities, states, and Congress to ban police use of face recognition technology, and we stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, the mounting evidence of these harms will only add to the urgency of our movement.

AI and Policing: 2024 in Review

There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing can take many forms that we can trace back decades, including various kinds of face recognition, predictive policing, data analytics, and automated gunshot recognition. But this year has seen the rise of a new and troublesome development in the integration of policing and artificial intelligence: AI-generated police reports.

Egged on by companies like Truleo and Axon, a rapidly growing market of vendors now uses large language models to write police reports for officers. In Axon’s case, this is done by using the audio from police body-worn cameras to create narrative reports, with minimal officer input beyond a prompt to add a few details here and there.
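
To make concrete what this kind of product does, here is a minimal sketch of an audio-to-report pipeline. The function names, prompt, and outputs are hypothetical stand-ins, not Axon’s actual code or interfaces; the stubs only mark where a vendor’s proprietary speech-to-text and language models would run.

    # Illustrative sketch of an audio-to-report pipeline. The transcription
    # and drafting steps are placeholder stubs; nothing here reflects any
    # vendor's actual models, prompts, or APIs.

    def transcribe_bodycam_audio(audio_path: str) -> str:
        """Stand-in for a speech-to-text pass over the body-worn camera's
        audio track. Anything the camera saw but no one said aloud never
        makes it into this transcript."""
        return "Dispatch reported a disturbance. Subject stated: ..."

    def draft_report(transcript: str, officer_notes: str) -> str:
        """Stand-in for a large language model call that turns the
        transcript plus a few officer-supplied details into a
        first-person narrative report."""
        prompt = (
            "Write a narrative police report from this transcript:\n"
            f"{transcript}\n"
            f"Additional officer notes: {officer_notes}\n"
        )
        return f"[model-generated narrative based on: {prompt!r}]"

    if __name__ == "__main__":
        transcript = transcribe_bodycam_audio("incident_0142.mp4")
        draft = draft_report(transcript, officer_notes="subject fled on foot")
        # The officer is then expected to review, edit, and attest to the
        # draft before it becomes part of the official record.
        print(draft)

Even this skeletal version shows two structural problems discussed below: the draft flows from audio alone, so visual details exist only if an officer adds them by hand, and the final review-and-attest step is the sole human check on whatever the model produced.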

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have a veneer of plausible deniability: they can simply say, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?

And there are so many more questions. Translation is an art, not a science, so how will this AI understand and depict things like physical conflict, or important rhetorical tools of policing like the phrases “stop resisting” and “drop the weapon,” even when a person is unarmed or not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if the tool was not explicitly made to handle these situations, officers left to their own devices will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork, and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and will provide more information as we learn more about how it’s being used.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

NSA Surveillance and Section 702 of FISA: 2024 in Review

Mass surveillance authority Section 702 of FISA, which allows the government to collect international communications, many of which happen to have one side in the United States, has been renewed several times since its creation with the passage of the 2008 FISA Amendments Act. This law has been an incessant threat to privacy for over a decade because the FBI operates on a “finders keepers” rule of surveillance: it believes that because the NSA has “incidentally” collected the U.S. side of conversations, it is now free to sift through them without a warrant.

But 2024 became the year this mass surveillance authority was not only reauthorized by majorities of both Democrats and Republicans—it was also the year the law got worse.

After a tense fight, some temporary reauthorizations, and a looming expiration, Congress finally passed the Reforming Intelligence and Securing America Act (RISAA) in April 2024. RISAA not only reauthorized the mass surveillance capabilities of Section 702 without any of the necessary reforms that had been floated in previous bills, it also enhanced its powers by expanding what it can be used for and who must comply with the government’s requests for data.

While Section 702 was enacted under the guise of targeting people not on U.S. soil to assist with national security investigations, there are no such narrow limits on the use of communications acquired under the mass surveillance law. Following the passage of RISAA, this private information can now be used to vet immigrants and asylum seekers and to conduct intelligence work for broadly construed “counter-narcotics” purposes.

The bill also included an expanded definition of “Electronic Communications Service Provider,” or ECSP. Under Section 702, anyone who oversees the storage or transmission of electronic communications—be it emails, text messages, or other online data—must cooperate with the federal government’s requests to hand over data. Under the expanded definition of ECSP, there are intense and well-founded fears that anyone who hosts servers or websites, or who provides internet to customers—or even just people who work in the same building as these providers—might be forced to become a tool of the surveillance state. As of December 2024, the fight is still on in Congress to clarify, narrow, and reform the definition of ECSP.

The one merciful change to come out of the 2024 smackdown over Section 702’s renewal is that the reauthorization lasts only two years. That means that in spring 2026 we have to be ready to fight again to bring meaningful change, transparency, and restriction to Big Brother’s favorite law.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Police Surveillance in San Francisco: 2024 in Review

From a historic ban on police using face recognition, to landmark CCOPS legislation, to the first ban in the United States of police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.

Unfortunately, 2024 was a far cry from those victories.

While EFF continues to fight for common-sense police reforms in our own backyard, this year saw a change in city politics toward something darker and more unaccountable than we’ve seen in a while.

In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went so far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.

Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In its haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin acquiring more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents have to wave their arms around and implore the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.

EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue, even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st-century governance.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts by making misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums, does far less than the company has been claiming.

The FTC alleged in its complaint that, despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the technology.

The settlement requires that the many schools using this technology to keep weapons out of classrooms be notified that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company also is prohibited from making statements regarding: 

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what does it even have left to sell?

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public money to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often operates.

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, in increasing racial bias in jail sentencing as well as in policing and crime prediction, and in falsely identifying people as suspects based on face recognition.

Now politicians, schools, police departments, and private venues have been duped again. This time by Evolv, a company that purports to sell “weapon detection” technology, which it claims uses AI to scan people entering a stadium, school, or museum and alert authorities if it recognizes the shape of a weapon on a person.

Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City ran a trial of Evolv technology in 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans, there were 118 false positives; twelve knives and no guns were recovered. That works out to a false alarm on more than 4 percent of all scans, and roughly ten false alarms for every weapon actually found.

Make no mistake: false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence, but throwing money at the problem by buying unproven technology is not the answer. It actually takes resources and funding away from more proven, systematic approaches. We applaud the FTC for standing up to the lucrative security-theater technology industry.

The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable

The Biden White House has released a memorandum on “Advancing the United States’ Leadership in Artificial Intelligence” which includes, among other things, a directive for the national security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, by leveraging already functioning private AI models for national security objectives.

Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of businesses and employers, and most of those affected will never know that a computer made a choice about them, will never be able to appeal that decision, and will never understand how it was made.

But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even more unaccountable and opaque. The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. EFF has had to fight in court a number of times in an attempt to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining the two will create a Frankenstein’s monster of secrecy, unaccountability, and decision-making power.

While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information about how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, understanding what information it retains and how it arrives at its conclusions will become central to how the national security state thinks about issues. This means the state will likely argue not only that the AI’s training data may need to be classified, but also that companies must, under penalty of law, keep their governing algorithms secret.

As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the U.S. national security state attempts to leverage powerful commercial AI to gain an edge, many questions remain unanswered about how much that ever-tightening relationship will impact much-needed transparency and accountability for private AI and for-profit automated decision-making systems.

How Many U.S. Persons Does Section 702 Spy On? The ODNI Needs to Come Clean.

EFF has joined with 23 other organizations including the ACLU, Restore the Fourth, the Brennan Center for Justice, Access Now, and the Freedom of the Press Foundation to demand that the Office of the Director of National Intelligence (ODNI) furnish the public with an estimate of exactly how many U.S. persons’ communications have been hoovered up, and are now sitting on a government server for law enforcement to unconstitutionally sift through at their leisure.

This letter was motivated by the fact that representatives of the National Security Agency (NSA) have promised in the past to provide the public with an estimate of how many U.S. persons—that is, people on U.S. soil—have had their communications “incidentally” collected through the surveillance authority Section 702 of the FISA Amendments Act. 

As the letter states, “ODNI and NSA cannot expect public trust to be unconditional. If ODNI and NSA continue to renege on pledges to members of Congress, and to withhold information that lawmakers, civil society, academia, and the press have persistently sought over the course of thirteen years, that public trust will be fatally undermined.”

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large and small telecommunications service providers, which hand over the digital data and communications they oversee. While Section 702 prohibits the NSA from intentionally targeting Americans with this mass surveillance, the agency routinely acquires a huge amount of innocent Americans' communications “incidentally” because, as it turns out, people in the United States communicate with people overseas all the time. This means that the U.S. government ends up with a massive pool consisting of the U.S. side of conversations as well as communications from all over the globe. Domestic law enforcement agencies, including the Federal Bureau of Investigation (FBI), can then conduct backdoor warrantless searches of these “incidentally collected” communications.

For over 10 years, EFF has fought hard every time Section 702 has come up for expiration, in the hope that we can get some much-needed reforms into any bill that seeks to reauthorize the authority. Most recently, in spring 2024, Congress renewed Section 702 for another two years with none of the changes necessary to restore privacy rights.

While we wait for the next opportunity to fight Section 702, joining our allies in signing this letter in the fight for transparency will give us a better understanding of the scope of the problem.

You can read the whole letter here.

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.

The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise the same caution as companies continue to peddle generative artificial intelligence (genAI) products for writing police reports, technology that could harm people who come into contact with the criminal justice system.

In a memo about AI-based tools that write narrative police reports from body camera audio, Chief Deputy Prosecutor Daniel J. Clark said the technology as it exists is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI will be able to help police write reliable reports in the near future.

We agree with Chief Deputy Clark that: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product Draft One claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their cases, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here.

You Really Do Have Some Expectation of Privacy in Public

Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, so your presence will be recorded in one way or another. There’s nothing stopping a person from observing you if they’re standing across the street. If law enforcement has decided to investigate you, they can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on their individual video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before.

Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public. While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected,” and that the Fourth Amendment protects “people, not places.” U.S. privacy law typically asks whether your expectation of privacy is something society considers “reasonable.”

This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—that the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI): records of which cell towers your phone connected to, and when, which together can provide a comprehensive chronicle of your movements over an extended period of time. Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so.
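
To see why accumulated CSLI is so much more revealing than any single record, consider a toy sketch. The records, tower names, and inference rules below are invented for illustration; real CSLI analysis works on far larger and more granular data, but the aggregation logic is just as simple.

    from collections import Counter
    from datetime import datetime

    # Toy CSLI-style records: (timestamp, nearest tower) pairs.
    # All data and labels here are invented for illustration.
    records = [
        (datetime(2018, 3, 5, 1, 0), "tower_near_home"),
        (datetime(2018, 3, 5, 9, 0), "tower_near_office"),
        (datetime(2018, 3, 5, 18, 30), "tower_near_clinic"),
        (datetime(2018, 3, 6, 1, 0), "tower_near_home"),
        (datetime(2018, 3, 6, 9, 0), "tower_near_office"),
        (datetime(2018, 3, 6, 19, 0), "tower_near_church"),
    ]

    # Any single row says little; aggregation infers a pattern of life.
    overnight = Counter(tower for ts, tower in records if ts.hour < 6)
    daytime = Counter(tower for ts, tower in records if 8 <= ts.hour <= 17)
    evenings = {tower for _, tower in records} - set(overnight) - set(daytime)

    print("Likely home:", overnight.most_common(1)[0][0])
    print("Likely workplace:", daytime.most_common(1)[0][0])
    print("Evening associations:", evenings)  # e.g., clinic, church

Two days of fake data already suggest a home, a workplace, and evening visits to a clinic and a church; 127 days of real records would expose exactly the kinds of sensitive associations discussed below.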

Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.”

As Chief Justice John Roberts said in the majority opinion:

“Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . .

As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . .  A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public.

If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy.

It’s crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing, more revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward, what can police infer from having those sensitive and chronologically expansive records of your movements? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like these involve people driving on public roads or being seen in public, but we also have to take time into consideration. Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location.

The courts may still be catching up with the technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions on tracking our movements over time, even in public, and even when you find yourself walking past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on why privacy isn’t dead.

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of a safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, to government accountability watchdogs, to civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it’s used as part of the decision-making regarding immigration enforcement and adjudications.

Read the letter here. 

The letter highlights the findings of a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies: U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt on the part of the federal government to use AI responsibly and to contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated it into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence suggests otherwise, or is, at best, mixed.

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, which EFF categorically opposes: they often “predict” that crime will occur in Black and Brown neighborhoods because of the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
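The “garbage in, garbage out” dynamic is easy to demonstrate. Below is a minimal, purely illustrative Python sketch, not any vendor’s actual system; every number in it is fabricated. A toy screener trained on historically skewed hiring outcomes simply reproduces that skew:

```python
# A toy illustration of "garbage in, garbage out" (all numbers fabricated).
# A screener trained only on historical hiring outcomes learns the bias in
# those outcomes, not anything true about candidate quality.

# Fake history: applicants from group "A" were hired 80% of the time and
# applicants from group "B" only 20% of the time, reflecting past human
# bias rather than merit.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def predicted_hire_rate(group: str) -> float:
    """The 'model' simply reproduces the historical hire rate per group."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(group, predicted_hire_rate(group))  # A -> 0.8, B -> 0.2

# Two equally qualified candidates now receive very different scores solely
# because of the skew baked into the training data.
```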

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operations without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process for determining eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools. 

Atlanta Police Must Stop High-Tech Spying on Political Movements

The Atlanta Police Department has been snooping on social media to closely monitor the meetings, protests, and canvassing (even book clubs and pizza parties) of the political movement to stop “Cop City,” a police training center that would destroy part of an urban forest. Activists already believed they were likely under surveillance by the Atlanta Police Department, given the evidence in criminal cases brought against them, but the extent of the monitoring has only just been revealed. The Brennan Center for Justice has obtained and released over 2,000 pages of emails from inside the Atlanta Police Department chronicling how closely the department was watching the movement’s social media.

You can read all of the emails here.

The emails reveal monitoring that went far beyond instances when the department felt that laws might have been broken. Instead, officers tracked every event even tangentially related to the movement: not just protests but pizza nights, canvassing for petition signatures, and reading groups. This threatens people’s ability to exercise their First Amendment-protected rights to protest and to affiliate with various groups and political movements. The police overreach in Atlanta will deter people from practicing their politics in a way that is supposed to be protected in the United States.

To understand the many lines crossed by the Atlanta Police Department’s high-tech spying, it’s helpful to look back at the efforts to end political spying in New York City. In 1985, the pivotal legal case Handschu v. Special Services Division yielded important limits, which have been strengthened in several subsequent court decisions. The case established the illegality of police spying on people because of their religious or political beliefs, and people nationwide should have similar protections for their rights to protest, organize, and speak publicly without fear of invasive surveillance and harassment. The Atlanta Police Department’s use of social media to spy on protesters today echoes the NYPD’s use of film to spy on protesters decades ago. In 2019, the New York City Municipal Archives digitized 140 hours of NYPD surveillance footage of protests and political activity from the 1950s through the 1970s. That footage shows exactly the type of organizing and protesting the APD is so eager to monitor now in Atlanta.

Atlanta is one of the most heavily surveilled cities in the United States. According to EFF’s Atlas of Surveillance, law enforcement agencies in Atlanta, supported financially by the Atlanta Police Foundation, have contracts to use nearly every type of surveillance technology we track. That is a dangerous combination. Worse, Atlanta lacks laws, like a CCOPS ordinance or a face recognition ban, to rein in police tech. Thanks to the Brennan Center, we also have strong proof of widespread social media monitoring of political activity. This is exactly why the city is so ripe for legislation that imposes democratic limits on whether police can use their ever-mounting pile of invasive technology, and that places privacy limits on any such use.

Until that time comes, make sure you’re up to speed on EFF’s Surveillance Self-Defense guide for attending a protest. And, if you’re on the go, bring the printable pocket version with you.

The SFPD’s Intended Purchase of a Robot Dog Triggers Board of Supervisors’ Oversight Obligations

The San Francisco Police Department (SFPD) wants to get a robot quadruped, popularly known as a robot dog. The city’s Board of Supervisors has a regulatory duty to scrutinize this intended purchase, and the power to block it altogether.

The SFPD recently proposed the acquisition of a new robot dog in a report about the department’s existing military arsenal and its proposed future expansion. The particular model the SFPD claims it is exploring, Boston Dynamics’s Spot, is capable of intrusion and surveillance in a manner similar to drones and other unmanned vehicles, and is able to carry “payloads” such as cameras.

The SFPD’s disclosure came about as a result of a California law, A.B. 481, which requires police departments to make publicly available information about “military equipment,” including weapons and surveillance tools such as drones, firearms, tanks, and robots. Some of this equipment may come through the federal government’s military surplus program.

A.B. 481 also requires a law enforcement agency to seek approval from its local governing body when acquiring, using, or seeking funds for military equipment and submit a military equipment policy. That policy must be made publicly available and must be approved by the governing body of the jurisdiction on a yearly basis. As part of that approval process, the governing body must determine that the policy meets the following criteria:

  • The military equipment is necessary because there is no reasonable alternative that can achieve the same objective of officer and civilian safety
  • The proposed military equipment use policy will safeguard the public’s welfare, safety, civil rights, and civil liberties
  • If purchasing the equipment, the equipment is reasonably cost effective compared to available alternatives that can achieve the same objective of officer and civilian safety
  • Prior military equipment use complied with the military equipment use policy that was in effect at the time, or if prior uses did not comply with the accompanying military equipment use policy, corrective action has been taken to remedy nonconforming uses and ensure future compliance

Based on the oversight requirements imposed by A.B. 481, the San Francisco Board of Supervisors must ask the SFPD some important questions before deciding if the police department actually needs a robot dog: How will the SFPD use this surveillance equipment? Given that the robot dog does not have the utility of one of the department’s bomb disposal robots, why would this robot be useful? What can this robot do that other devices it already has at its disposal cannot do? Does the potential limited use of this device justify its expenditure? How does the SFPD intend to safeguard civil rights and civil liberties in deploying this robot into communities that may already be overpoliced?

If the SFPD cannot make a compelling case for the purchase of a robot quadruped, the Board of Supervisors has a responsibility to block the sale.

A.B. 481 serves as an important tool for democratic control of police acquisition of surveillance technology despite recent local efforts to undermine such oversight. In 2019, San Francisco passed a Community Control of Police Surveillance (CCOPS) ordinance, which required city departments like the SFPD to seek Board approval before acquiring or using new surveillance technologies, in a transparent process that offered the opportunity for public comment. This past March, voters scaled back this law by enacting Proposition E, which allows the SFPD a one-year “experimentation” period to test out new surveillance technologies without a use policy or Board approval. However, the state statute still governs military equipment, such as the proposed robot dog, which continues to need Board approval before purchase and still requires a publicly available policy that takes into consideration the uses of the equipment and its civil liberties impacts on the public.

In 2022, the San Francisco Board of Supervisors banned police deployment of deadly force via remote-controlled robot, so at least we know this robot dog will not be used in that way. It should also be noted that Boston Dynamics has vowed not to arm its robots. But just because this robot dog doesn’t have a bomb strapped to it doesn’t mean it will prove innocuous to the public, useful to police, or at all helpful to the city. The Board of Supervisors has an opportunity and a responsibility to ensure that any procurement of robots comes with a strong justification from the SFPD, a clear policy around how the robot can be used, and consideration of the impacts on civil rights and civil liberties. Just because narratives about rising crime have gained a foothold does not mean that elected officials get to abdicate any sense of reason or practicality in deciding what technology they allow police departments to buy and use. When it comes to military equipment, the state of California has given cities an oversight tool, and San Francisco should use it.

Police Are Using Drones More and Spending More for Them

Police in Minnesota are buying and flying more drones than ever before, according to an annual report recently released by the state’s Bureau of Criminal Apprehension (BCA). Minnesota law enforcement flew drones without a warrant 4,326 times in 2023, racking up a statewide expense of over $1 million. This marks a large, 41 percent increase from 2022, when departments across the state used drones 3,076 times and spent $646,531.24 doing so. The data show that more was spent on drones last year than in the previous two years combined. The Minneapolis Police Department, the state’s largest, implemented a new drone program at the end of 2022 and reported that its 63 warrantless flights in 2023 cost nearly $100,000.
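The 41 percent figure follows directly from the flight counts in the report; here is a minimal sketch (the variable names are ours) that checks the arithmetic:

```python
# Check the year-over-year increase in warrantless drone flights
# using the figures cited from Minnesota's BCA report.
flights_2022 = 3076
flights_2023 = 4326

increase = (flights_2023 - flights_2022) / flights_2022
print(f"{increase:.0%}")  # -> 41%, matching the reported increase
```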

Since 2020, the state of Minnesota has been obligated to put out a yearly report documenting every time, and for what reason, law enforcement agencies in the state — local, county, or statewide — used unmanned aerial vehicles (UAVs), more commonly known as drones, without a warrant. This is partly because Minnesota law requires a warrant for law enforcement to use drones except in specific situations listed in the statute. The State Court Administrator is also required to provide a public report of the number of warrants issued for the use of UAVs and the data gathered under them. These regular reports give us a glimpse into how, and how often, police are actually using these devices. As more and more police departments around the country use drones or experiment with drones as first responders, Minnesota offers an example of what transparency around drone adoption can look like.

You can read our blog about the 2021 Minnesota report here.

According to EFF’s Atlas of Surveillance, 130 of Minnesota’s 408 law enforcement agencies have drones. Of the Minnesota agencies known to have drones prior to this month’s report, 29 did not provide the BCA with 2023 use and cost data.

One of the more revealing aspects of drone deployment documented in the report is the purpose for which police are using drones. A vast majority of uses, almost three-quarters, were related either to obtaining an aerial view of incidents involving injury or death, like car accidents, or to police training and public relations.

Are drones really just a $1 million training tool? We’ve argued many times that tools deployed by police for very specific purposes often find punitive uses that reach far beyond their original, possibly more innocuous intention. In the case of Minnesota’s drone usage, that risk can be seen in the other exceptions to the warrant requirement, such as surveilling a public event where there is a “heightened risk” to participant security. The warrant requirement is meant to prevent aerial surveillance that violates civil liberties, but these exceptions open the door to surveillance of First Amendment-protected gatherings and demonstrations.

California’s Facial Recognition Bill Is Not the Solution We Need

California Assemblymember Phil Ting has introduced A.B. 1814, a bill that would supposedly regulate police use of facial recognition technology. The problem is that it would do little to actually change the status quo of how police use this invasive and problematic technology. Police use of facial recognition poses a massive risk to civil liberties, privacy, and even our physical health, as the technology has been known to wrongfully sic armed police on innocent people, particularly Black men and women. This issue is too important to address with inadequate half-measures like A.B. 1814.

The bill dictates that police should examine facial recognition matches “with care” and that a match should not be the sole basis of probable cause for an arrest or search warrant. While we agree it is a big problem that police repeatedly use the matches spit out by a computer as the only justification for arresting people, the limit this bill imposes is, at least in theory, already the limit: police departments and facial recognition companies alike maintain that police cannot justify an arrest using only algorithmic matches. So what would this bill really change? It only gives the appearance of doing something to address face recognition technology’s harms, while inadvertently allowing the practice to continue.

Additionally, A.B. 1814 gives defendants no real recourse against police who violate its requirements: there is neither a suppression remedy nor a usable private cause of action. The bill also lacks transparency requirements that would compel police departments to reveal whether they used face recognition in the first place. This means that if police wrongfully arrested someone because a computer said they looked similar to a suspect, that person would likely never know they could sue the department for damages, unless they uncovered the technology’s use while being prosecuted.

Under leaky bureaucratic reforms like these, police may continue to use this technology to identify people at protests, to track marginalized individuals when they visit doctors or have other personal encounters, and in any number of other civil liberties-chilling ways, whether deployed overtly or inadvertently. It is for this reason that EFF continues to advocate for a complete ban on government use of face recognition, an approach that has already led cities across the United States to stand up for themselves and enact bans. Until the day comes that California lawmakers recognize the urgent need to ban government use of face recognition, we will continue to differentiate between bills that would make a serious difference in the lives of the surveilled and those that would not. That is why we are urging Assemblymembers to vote no on A.B. 1814.
