
How Many U.S. Persons Does Section 702 Spy On? The ODNI Needs to Come Clean.

EFF has joined with 23 other organizations, including the ACLU, Restore the Fourth, the Brennan Center for Justice, Access Now, and the Freedom of the Press Foundation, to demand that the Office of the Director of National Intelligence (ODNI) furnish the public with an estimate of exactly how many U.S. persons’ communications have been hoovered up, and are now sitting on a government server for law enforcement to unconstitutionally sift through at their leisure.

This letter was motivated by the fact that representatives of the National Security Agency (NSA) have promised in the past to provide the public with an estimate of how many U.S. persons—that is, people on U.S. soil—have had their communications “incidentally” collected through the surveillance authority Section 702 of the FISA Amendments Act. 

As the letter states, “ODNI and NSA cannot expect public trust to be unconditional. If ODNI and NSA continue to renege on pledges to members of Congress, and to withhold information that lawmakers, civil society, academia, and the press have persistently sought over the course of thirteen years, that public trust will be fatally undermined.”

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large and small telecommunications service providers which hand over the digital data and communications they oversee. While Section 702 prohibits the NSA from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally” because, as it turns out, people in the United States communicate with people overseas all the time. This means that the U.S. government ends up with a massive pool consisting of the U.S.-side of conversations as well as communications from all over the globe. Domestic law enforcement agencies, including the Federal Bureau of Investigation (FBI), can then conduct backdoor warrantless searches of these “incidentally collected” communications. 

For over 10 years, EFF has fought hard every time Section 702 expires in the hope that we can get some much-needed reforms into any bills that seek to reauthorize the authority. Most recently, in spring 2024, Congress renewed Section 702 for another two years with none of the changes necessary to restore privacy rights.

While we wait for the next opportunity to fight Section 702, we have joined our allies in signing this letter demanding transparency, which will give us a better understanding of the scope of the problem.

You can read the whole letter here.

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports... for now. This is a good development. We hope prosecutors across the country will exercise such caution as companies continue to peddle generative artificial intelligence (genAI) products for writing police reports that could harm people who come into contact with the criminal justice system.

In a memo about AI-based tools that write narrative police reports based on body camera audio, Chief Deputy Prosecutor Daniel J. Clark said the technology as it exists is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here.

You Really Do Have Some Expectation of Privacy in Public

Being out in the world advocating for privacy often means having to face a chorus of naysayers and nihilists. When we spend time fighting the expansion of Automated License Plate Readers capable of tracking cars as they move, or the growing ubiquity of both public and private surveillance cameras, we often hear a familiar refrain: “you don’t have an expectation of privacy in public.” This is not true. In the United States, you do have some expectation of privacy—even in public—and it’s important to stand up and protect that right.

How is it possible to have an expectation of privacy in public? The answer lies in the rise of increasingly advanced surveillance technology. When you are out in the world, of course you are going to be seen, so your presence will be recorded in one way or another. There’s nothing stopping a person from observing you if they’re standing across the street. If law enforcement has decided to investigate you, they can physically follow you. If you go to the bank or visit a courthouse, it’s reasonable to assume you’ll end up on their individual video security system.

But our ever-growing network of sophisticated surveillance technology has fundamentally transformed what it means to be observed in public. Today’s technology can effortlessly track your location over time, collect sensitive, intimate information about you, and keep a retrospective record of this data that may be stored for months, years, or indefinitely. This data can be collected for any purpose, or even for none at all. And taken in the aggregate, this data can paint a detailed picture of your daily life—a picture that is more cheaply and easily accessed by the government than ever before.

Because of this, we’re at risk of exposing more information about ourselves in public than we were in decades past. This, in turn, affects how we think about privacy in public. While your expectation of privacy is certainly different in public than it would be in your private home, there is no legal rule that says you lose all expectation of privacy whenever you’re in a public place. To the contrary, the U.S. Supreme Court has emphasized since the 1960s that “what [one] seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” The Fourth Amendment protects “people, not places.” U.S. privacy law instead typically asks whether your expectation of privacy is something society considers “reasonable.”

This is where mass surveillance comes in. While it is unreasonable to assume that everything you do in public will be kept private from prying eyes, there is a real expectation that when you travel throughout town over the course of a day—running errands, seeing a doctor, going to or from work, attending a protest—that the entirety of your movements is not being precisely tracked, stored by a single entity, and freely shared with the government. In other words, you have a reasonable expectation of privacy in at least some of the uniquely sensitive and revealing information collected by surveillance technology, although courts and legislatures are still working out the precise contours of what that includes.

In 2018, the U.S. Supreme Court decided a landmark case on this subject, Carpenter v. United States. In Carpenter, the court recognized that you have a reasonable expectation of privacy in the whole of your physical movements, including your movements in public. It therefore held that the defendant had an expectation of privacy in 127 days’ worth of accumulated historical cell site location information (CSLI). CSLI records, generated as your phone connects to nearby cell towers, can provide a comprehensive chronicle of your movements over an extended period of time. Accessing this information intrudes on your private sphere, and the Fourth Amendment ordinarily requires the government to obtain a warrant in order to do so.

Importantly, you retain this expectation of privacy even when those records are collected while you’re in public. In coming to its holding, the Carpenter court wished to preserve “the degree of privacy against government that existed when the Fourth Amendment was adopted.” Historically, we have not expected the government to secretly catalogue and monitor all of our movements over time, even when we travel in public. Allowing the government to access cell site location information contravenes that expectation. The court stressed that these accumulated records reveal not only a person’s particular public movements, but also their “familial, political, professional, religious, and sexual associations.”

As Chief Justice John Roberts said in the majority opinion:

“Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection. Whether the Government employs its own surveillance technology . . . or leverages the technology of a wireless carrier, we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through [cell phone site data]. The location information obtained from Carpenter’s wireless carriers was the product of a search. . . .

As with GPS information, the time-stamped data provides an intimate window into a person’s life, revealing not only his particular movements, but through them his “familial, political, professional, religious, and sexual associations.” These location records “hold for many Americans the ‘privacies of life.’” . . .  A cell phone faithfully follows its owner beyond public thoroughfares and into private residences, doctor’s offices, political headquarters, and other potentially revealing locales. Accordingly, when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.”

As often happens in the wake of a landmark Supreme Court decision, there has been some confusion among lower courts in trying to determine what other types of data and technology violate our expectation of privacy when we’re in public. There are admittedly still several open questions: How comprehensive must the surveillance be? How long of a time period must it cover? Do we only care about backward-looking, retrospective tracking? Still, one overall principle remains certain: you do have some expectation of privacy in public.

If law enforcement or the government wants to know where you’ve been all day long over an extended period of time, that combined information is considered revealing and sensitive enough that police need a warrant for it. We strongly believe the same principle also applies to other forms of surveillance technology, such as automated license plate reader camera networks that capture your car’s movements over time. As more and more integrated surveillance technologies become the norm, we expect courts will expand existing legal decisions to protect this expectation of privacy.

It’s crucial that we do not simply give up on this right. Your location over time, even if you are traversing public roads and public sidewalks, is revealing, more revealing than many people realize. If you drive from a specific person’s house to a protest, and then back to that house afterward, what can police infer from having those sensitive and chronologically expansive records of your movement? What could people insinuate about you if you went to a doctor’s appointment at a reproductive healthcare clinic and then drove to a pharmacy three towns away from where you live? Scenarios like this involve people driving on public roads or being seen in public, but we also have to take time into consideration. Tracking someone’s movements all day is not nearly the same thing as seeing their car drive past a single camera at one time and location.

The courts may still be catching up with the law and technology, but that doesn’t mean it’s a surveillance free-for-all just because you’re in public. The government still faces important restrictions on tracking our movements over time and in public, even when we walk past individual security cameras. This is why we do what we do: despite the naysayers, someone has to continue to hold the line and educate the world on how privacy isn’t dead.

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of a safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, to government accountability watchdogs, to civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it’s used as part of the decision-making regarding immigration enforcement and adjudications.

Read the letter here. 

The letter highlighted the findings from a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies, U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on the loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt on the part of the federal government to use AI responsibly and contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated AI into many of its functions. These products are often a result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence begs to differ, or, at best, is mixed.  

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which highlighted male applicants more often because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, something EFF categorically opposes, which often “predict” that crimes are more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operation without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process to determine eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools. 

Atlanta Police Must Stop High-Tech Spying on Political Movements

The Atlanta Police Department has been snooping on social media to closely monitor the meetings, protests, canvassing–even book clubs and pizza parties–of the political movement to stop “Cop City,” a police training center that would destroy part of an urban forest. Activists already believed they were likely under surveillance by the Atlanta Police Department due to evidence in criminal cases brought against them, but the extent of the monitoring has only just been revealed. The Brennan Center for Justice has obtained and released over 2,000 pages of emails from inside the Atlanta Police Department chronicling how closely they were watching the social media of the movement.

You can read all of the emails here.


The emails reveal monitoring that went far beyond instances when the department felt that laws might have been broken. Instead, they tracked every event even tangentially related to the movement–not just protests but pizza nights, canvassing for petition signatures, and reading groups. This threatens people’s ability to exercise their First Amendment-protected right to protest and affiliate with various groups and political movements. The police overreach in Atlanta will deter people from practicing their politics in a way that is supposed to be protected in the United States.

To understand the many lines crossed by the Atlanta Police Department’s high-tech spying, it’s helpful to look back at the efforts to end political spying in New York City. In 1985, the pivotal legal case Handschu v. Special Services Division yielded important limits, which have been strengthened in several subsequent court decisions. The case demonstrated the illegality of police spying on people because of their religious or political beliefs. Indeed, people nationwide should have similar protections of their rights to protest, organize, and speak publicly without fear of invasive surveillance and harassment. The Atlanta Police Department’s use of social media to spy on protesters today echoes NYPD’s use of film to spy on protesters going back decades. In 2019, the New York City municipal archives digitized 140 hours of NYPD surveillance footage of protests and political activity from the 1950s through the 1970s. This footage shows the type of organizing and protesting the APD is so eager to monitor now in Atlanta.

Atlanta is one of the most heavily surveilled cities in the United States. According to EFF’s Atlas of Surveillance, law enforcement in Atlanta, supported financially by the Atlanta Police Foundation, have contracts to use nearly every type of surveillance technology we track. This is a dangerous combination. Worse, Atlanta lacks laws like CCOPS or a Face Recognition Ban to rein in police tech. Thanks to the Brennan Center, we also have strong proof of widespread social media monitoring of political activity. This is exactly why the city is so ripe for legislation to impose democratic limits on whether police can use its ever-mounting pile of invasive technology, and to place privacy limits on such use.

Until that time comes, make sure you’re up to speed on EFF’s Surveillance Self Defense Guide for attending a protest. And, if you’re on the go, bring this printable pocket version with you. 

The SFPD’s Intended Purchase of a Robot Dog Triggers Board of Supervisors’ Oversight Obligations

The San Francisco Police Department (SFPD) wants to get a robot quadruped, popularly known as a robot dog. The city’s Board of Supervisors has a regulatory duty to probe into this intended purchase, including potentially blocking it altogether.

The SFPD recently proposed the acquisition of a new robot dog in a report about the department’s existing military arsenal and its proposed future expansion. The particular model that SFPD claims they are exploring, Boston Dynamics’s Spot, is capable of intrusion and surveillance in a manner similar to drones and other unmanned vehicles and is able to hold “payloads” like cameras.

The SFPD’s disclosure came about as a result of a California law, A.B. 481, which requires police departments to make publicly available information about “military equipment,” including weapons and surveillance tools such as drones, firearms, tanks, and robots. Some of this equipment may come through the federal government’s military surplus program.

A.B. 481 also requires a law enforcement agency to seek approval from its local governing body when acquiring, using, or seeking funds for military equipment and submit a military equipment policy. That policy must be made publicly available and must be approved by the governing body of the jurisdiction on a yearly basis. As part of that approval process, the governing body must determine that the policy meets the following criteria:

  • The military equipment is necessary because there is no reasonable alternative that can achieve the same objective of officer and civilian safety
  • The proposed military equipment use policy will safeguard the public’s welfare, safety, civil rights, and civil liberties
  • If purchasing the equipment, the equipment is reasonably cost effective compared to available alternatives that can achieve the same objective of officer and civilian safety
  • Prior military equipment use complied with the military equipment use policy that was in effect at the time, or if prior uses did not comply with the accompanying military equipment use policy, corrective action has been taken to remedy nonconforming uses and ensure future compliance

Based on the oversight requirements imposed by A.B. 481, the San Francisco Board of Supervisors must ask the SFPD some important questions before deciding if the police department actually needs a robot dog: How will the SFPD use this surveillance equipment? Given that the robot dog does not have the utility of one of the department’s bomb disposal robots, why would this robot be useful? What can this robot do that other devices it already has at its disposal cannot do? Does the potential limited use of this device justify its expenditure? How does the SFPD intend to safeguard civil rights and civil liberties in deploying this robot into communities that may already be overpoliced?

If the SFPD cannot make a compelling case for the purchase of a robot quadruped, the Board of Supervisors has a responsibility to block the sale.

A.B. 481 serves as an important tool for democratic control of police’s acquisition of surveillance technology despite recent local efforts to undermine such oversight. In 2019, San Francisco passed a Community Control of Police Surveillance (CCOPS) ordinance, which required city departments like the SFPD to seek Board approval before acquiring or using new surveillance technologies, in a transparent process that offered the opportunity for public comment. This past March, voters scaled back this law by enacting Proposition E, which allows the SFPD a one-year “experimentation” period to test out new surveillance technologies without a use policy or Board approval. However, the state statute still governs military equipment, such as the proposed robot dog, which continues to need Board approval before purchasing and still requires a publicly available policy that takes into consideration the uses of the equipment and the civil liberties impacts on the public.

In 2022, the San Francisco Board of Supervisors banned police deployment of deadly force via remote control robot, so at least we know this robot dog will not be used in that way. It should also be noted that Boston Dynamics has vowed not to arm their robots. But just because this robot dog doesn’t have a bomb strapped to it doesn’t mean it will prove innocuous to the public, useful to police, or at all helpful to the city. The Board of Supervisors has an opportunity and a responsibility to ensure that any procurement of robots comes with a strong justification from the SFPD, a clear policy around how it can be used, and consideration of the impacts on civil rights and civil liberties. Just because narratives about rising crime have gained a foothold does not mean that elected officials get to abdicate any sense of reason or practicality in what technology they allow police departments to buy and use. When it comes to military equipment, the state of California has given cities an oversight tool, and San Francisco should use it.

Police are Using Drones More and Spending More For Them

Police in Minnesota are buying and flying more drones than ever before, according to an annual report recently released by the state’s Bureau of Criminal Apprehension (BCA). Minnesotan law enforcement flew their drones without a warrant 4,326 times in 2023, racking up a state-wide expense of over $1 million. This marks a 41 percent increase from 2022, when departments across the state used drones 3,076 times and spent $646,531.24 on using them. The data show that more was spent on drones last year than in the previous two years combined. The Minneapolis Police Department, the state’s largest police department, implemented a new drone program at the end of 2022 and reported that its 63 warrantless flights in 2023 cost nearly $100,000.
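Those headline figures are easy to check against the report’s own numbers. A quick sketch (the $1 million statewide total and the $100,000 Minneapolis figure are rounded approximations, as the report gives them only as "over $1 million" and "nearly $100,000"):

```python
# Sanity-check the Minnesota BCA drone figures cited above.
flights_2022, flights_2023 = 3076, 4326
statewide_spend_2023 = 1_000_000        # "over $1 million" (approximate)
mpd_spend, mpd_flights = 100_000, 63    # Minneapolis PD, "nearly $100,000" (approximate)

# Year-over-year growth in warrantless flights.
increase = (flights_2023 - flights_2022) / flights_2022
print(f"Flight increase: {increase:.1%}")  # ~40.6%, matching the reported 41 percent

# Rough per-deployment cost, statewide and for Minneapolis alone.
print(f"Statewide cost per warrantless flight: ${statewide_spend_2023 / flights_2023:,.0f}")
print(f"Minneapolis PD cost per flight: ${mpd_spend / mpd_flights:,.0f}")
```

On these figures, each warrantless flight cost roughly $231 statewide, and about $1,587 for Minneapolis alone.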

Since 2020, the state of Minnesota has been obligated to put out a yearly report documenting every time and reason law enforcement agencies in the state — local, county, or state-wide — used unmanned aerial vehicles (UAVs), more commonly known as drones, without a warrant. This is partly because Minnesota law requires a warrant for law enforcement to use drones except for specific situations listed in the statute. The State Court Administrator is also required to provide a public report of the number of warrants issued for the use of UAVs, and the data gathered by them. These regular reports give us a glimpse into how police are actually using these devices and how often. As more and more police departments around the country use drones or experiment with drones as first responders, it offers an example of how transparency around drone adoption can be done.

You can read our blog about the 2021 Minnesota report here.

According to EFF’s Atlas of Surveillance, 130 of Minnesota’s 408 law enforcement agencies have drones. Of the Minnesota agencies known to have drones prior to this month’s report, 29 did not provide the BCA with 2023 use and cost data.

One of the more revealing aspects of drone deployment provided by the report is the purpose for which police are using them. The vast majority of uses, almost three-quarters of all drone flights in Minnesota, were either related to obtaining an aerial view of incidents involving injuries or death, like car accidents, or for police training and public relations purposes.

Are drones really just a $1 million training tool? We’ve argued many times that tools deployed by police for very specific purposes often find punitive uses that far exceed their original, possibly more innocuous intention. In the case of Minnesota’s drone usage, that can be seen in the other exceptions to the warrant requirement, such as surveilling a public event where there’s a “heightened risk” for participant security. The warrant requirement is meant to prevent aerial surveillance from violating civil liberties, but these exceptions open the door to surveillance of First Amendment-protected gatherings and demonstrations.

California’s Facial Recognition Bill Is Not the Solution We Need

California Assemblymember Phil Ting has introduced A.B. 1814, a bill that would supposedly regulate police use of facial recognition technology. The problem is that it would do little to actually change the status quo of how police use this invasive and problematic technology. Police use of facial recognition poses a massive risk to civil liberties, privacy, and even our physical safety, as the technology has been known to wrongfully sic armed police on innocent people, particularly Black men and women. That’s why this issue is too important to address with inadequate half-measures like A.B. 1814.

The bill dictates that police should examine facial recognition matches “with care” and that a match should not be the sole basis of probable cause for an arrest or search warrant. While we agree it is a big problem that police repeatedly use the matches spit out by a computer as the only justification for arresting people, the limit this bill imposes is, in theory, already the rule. Police departments and facial recognition companies alike maintain that police cannot justify an arrest using only algorithmic matches–so what would this bill really change? It only gives the appearance of doing something to address face recognition technology's harms, while allowing the practice to continue.

Additionally, A.B. 1814 gives defendants no real recourse against police who violate its requirements: there is neither a suppression remedy nor a usable private cause of action. The bill also lacks transparency requirements that would compel police departments to reveal whether they used face recognition in the first place. This means that if police wrongfully arrested someone because a computer said they looked similar to a suspect, that person would likely never know they could sue the department for damages, unless they uncovered the face recognition use while being prosecuted. 

Under leaky bureaucratic reforms like these, police may continue to use this technology to identify people at protests, track marginalized individuals when they visit doctors or have other personal encounters, and carry out any number of other civil liberties-chilling uses, whether overt or inadvertent. For this reason, EFF continues to advocate for a complete ban on government use of face recognition–an approach that cities across the United States have already taken by enacting their own bans. Until the day comes that California lawmakers realize the urgent need to ban government use of face recognition, we will continue to differentiate between bills that would make a serious difference in the lives of the surveilled and those that would not. That is why we are urging Assemblymembers to vote no on A.B. 1814. 

Security, Surveillance, and Government Overreach – the United States Set the Path but Canada Shouldn’t Follow It

The Canadian House of Commons is currently considering Bill C-26, which would make sweeping amendments to the country’s Telecommunications Act that would expand its Minister of Industry’s power over telecommunication service providers. It’s designed to accomplish a laudable and challenging goal: ensure that government and industry partners efficiently and effectively work together to strengthen Canada’s network security in the face of repeated hacking attacks.

As researchers and civil society organizations have noted, however, the legislation contains vague and overbroad language that may invite abuse and pressure ISPs to do the government’s bidding at the expense of Canadian privacy rights. It would vest substantial authority in Canadian executive branch officials to (in the words of C-26’s summary) “direct telecommunications service providers to do anything, or refrain from doing anything, that is necessary to secure the Canadian telecommunications system.” That could include ordering telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. Safeguards to protect privacy and civil rights are few; C-26’s only express limit is that Canadian officials cannot order service providers to intercept private or radio-based telephone communications.

Unfortunately, we in the United States know all too well what can happen when government officials assert broad discretionary power over telecommunications networks. For over 20 years, the U.S. government has deputized internet service providers and systems to surveil Americans and their correspondents, without meaningful judicial oversight. These legal authorities and details of the surveillance have varied, but, in essence, national security law has allowed the U.S. government to vacuum up digital communications so long as the surveillance is directed at foreigners currently located outside the United States and doesn’t intentionally target Americans. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches to find Americans’ communications.

Congress has attempted to add in additional safeguards over the years, to little avail. In 2023, for example, the Federal Bureau of Investigation (FBI) released internal documents used to guide agency personnel on how to search the massive databases of information they collect. Despite reassurances from the intelligence community about its “culture of compliance,” these documents reflect little interest in protecting privacy or civil liberties. At the same time, the NSA and domestic law enforcement authorities have been seeking to undermine the encryption tools and processes on which we all rely to protect our privacy and security.

C-26 is not identical to U.S. national security laws. But without adequate safeguards, it could open the door to similar practices and orders. What is worse, some of those orders could be secret, at the government’s discretion. In the U.S., that kind of secrecy has made it impossible for Americans to challenge mass surveillance in court. We’ve also seen companies presented with gag orders in connection with “national security letters” compelling them to hand over information. C-26 does allow for judicial review of non-secret orders, e.g. an order requiring an ISP to cut off an account-holder or website, if the subject of those orders believes they are unreasonable or ungrounded. But that review may include secret evidence that is kept from applicants and their counsel.

Canadian courts will decide whether a law authorizing secret orders and evidence is consistent with Canada’s legal tradition. But either way, the U.S. experience offers a cautionary tale of what can happen when a government grants itself broad powers to monitor and direct telecommunications networks, absent corresponding protections for human rights. In effect, the U.S. government has created, in the name of national security, a broad exception to the Constitution that allows the government to spy on all Americans and denies them any viable means of challenging that spying. We hope Canadians will refuse to allow their government to do the same in the name of “cybersecurity.”

One (Busy) Day in the Life of EFF’s Activism Team

EFF is an organization of lawyers, technologists, policy professionals, and, importantly, full-time activists, who fight to make sure that technology enhances rather than threatens civil liberties on a global scale. EFF’s activism team includes experienced issue experts, master communicators, and grassroots organizers who coordinate and orchestrate EFF’s activist campaigns, which include but go well beyond litigation, technical analyses and solutions, and direct lobbying of legislators.

If you’ve ever wondered what it would be like to work on the activism team at EFF, or if you are curious about applying for a job at EFF, take a look at one exceptional (but also fairly ordinary) day in the life of five members of the team:

Jillian York, Director For International Freedom of Expression

I wake up around 9:00, make coffee, and check my email and internal messages (we use Mattermost, a self-hosted chat tool). I live in Berlin—between four and nine hours ahead of most of my colleagues—which on most days enables me to get some “deep work” done before anyone else is online.

I see that one of my colleagues in San Francisco left a late-night message asking for someone to edit a short blog post. No one else is awake yet, so I jump on it. I then work on a piece of writing of my own, documenting the case of Alaa Abd El Fattah, an Egyptian technologist, blogger, and EFF supporter who’s been imprisoned on and off for the past decade. After that, I respond to some emails and messages from colleagues from the day prior.

EFF offers us flexible hours, and since I’m in Europe I often have to take calls in the evening (6 or 7 pm my time is 9 or 10 am San Francisco time, when a lot of team meetings take place). I see this as an advantage, as it allows me to meet a friend for lunch and hit the gym before heading back to work. 

There’s a dangerous new bill being proposed in a country where we don’t have much expertise, but which looks likely to have a broader impact across the region, so a colleague and I hop on a call with a local digital rights group to plan a strategy. When we work internationally, we always consult or partner with local groups to make sure that we’re working toward the best outcome for the local population.

While I’m on the call, my Signal messages start blowing up. A lot of the partners we work with in another region of the world prefer to organize there for reasons of safety, and there’s been a cyberattack on a local media publication. Our partners are looking for some assistance in dealing with it, so I send some messages to colleagues (both at EFF and other friendly organizations) to get them the right help.

After handling some administrative tasks, it’s time for the meeting of the international working group. In that group, we discuss threats facing people outside the U.S., often in areas that are underrepresented by both U.S. and global media.

After that meeting, it's off to prep for a talk I'll be giving at an upcoming conference. There have been improvements in social media takedown transparency reporting, but there are a lot of ways to continue that progress, and a former colleague and I will be hosting a mock game show about the heroes and anti-heroes of transparency. By the time I finish that, it's nearly 11 pm my time, so it's off to bed for me, but not for everyone else!

Matthew Guariglia, Senior Policy Analyst Responsible for Government Surveillance Advocacy

My morning can sometimes start surprisingly early. This morning, a reporter I often speak to called to ask if I had any comments about a major change to how Amazon Ring security cameras will allow police to request access to users’ footage. I quickly try to make sense of the new changes—Amazon’s press release doesn’t say nearly enough. Giving a statement to the press requires a brief huddle between me, EFF’s press director, and other lawyers, technologists, and activists who have worked on our Ring campaign over the last few years. Soon, we have a statement that conveys exactly what we think Amazon needs to do differently, and what users and non-users should know about this change and its impact on their rights. About an hour after that, we turn our brief statement into a longer blog post for everyone to read. 

For the rest of the day, in between other obligations and meetings, I field press calls and TV interviews from reporters asking whether this change in policy is a win for privacy. My first meeting is with representatives of about a dozen mostly local groups in the Bay Area, where EFF is located, about the next steps for opposing Proposition E, a ballot measure that would greatly reduce oversight of what technology the San Francisco Police Department uses. I send a few requests to our design team about printing window signs and then talk with our Activism Director about making plans to potentially fly a plane over the city. Shortly after that, I’m in a coalition meeting of national civil liberties organizations discussing ways of keeping a clean reauthorization of Section 702 (a mass surveillance authority that expires this year) out of a must-pass bill that would continue to fund the government. 

In the afternoon, I watch and take notes as a Congressional committee holds a hearing about AI use in law enforcement. Keeping an eye on this allows me to see what arguments and talking points law enforcement is using, which members of Congress seem critical of AI use in policing and might be worth getting in touch with, and whether there are any revelations in the hearing that we should communicate to our members and readers. 

After the hearing, I send brief notes to a Senator and their staff on a draft of a public letter they intend to send to industry leaders about data collection—and about when law enforcement may or may not request access to stored user data. 

Tomorrow, I’ll follow up on many of the plans made over the course of this day: I’ll need to send out a mass email to EFF supporters in the Bay Area rallying them to join in the fight against Proposition E, and review new federal legislation to see if it offers enough reform of Section 702 that EFF might consider supporting it. 

Hayley Tsukayama, Associate Director of Legislative Activism

I settle in with a big mug of tea to start a day full of online meetings. This probably sounds boring to a lot of people, but I know I'll have a ton of interesting conversations today.

Much of my job coordinating our state legislative work requires speaking with like-minded organizations across the country. EFF tries, but we can't be everywhere we want to be all of the time. So, for example, we host a regular call with groups pushing for stronger state consumer data privacy laws. This call gives us a place to share information about a dozen or more privacy bills in as many states. Some groups on the call focus on one state; others, like EFF, work in multiple states. Our groups may not agree on every bill, but we're all working toward a world where companies must respect our privacy by default.

You know, just a small goal.

Today, we get a summary of a hearing that a friendly lawmaker organized to give politicians from several states a forum to explain how big tech companies, advertisers, and data brokers have stymied strong privacy legislation. This is one reason we compare notes: the more we know about what they're doing, the better we can fight them—even though the other side has more money and staff for state legislative work than all of us combined.

From there, I jump to a call on emerging AI legislation in states. Many companies pushing weak AI regulation make software that monitors employees, so this work has connected me to a universe of labor advocates I've never gotten to work with before. I've learned so much from them, both about how AI affects working conditions and about the ways they organize and mobilize people. Working in coalitions shows me how different people bring their strengths to a broader movement.

At EFF, our activists know: we win with words. I make a note to myself to start drafting a blog post on some bad copy-paste AI bills showing up across the country, which companies have carefully written to exempt their own products.

My position lets me stick my nose into almost every EFF issue, which is one thing I love about it. For the rest of the day, I meet with a group of right-to-repair advocates whose decades of advocacy have racked up incredible wins in the past couple of years. I update a position letter to the California legislature about automotive data. I send a draft action to one of our lawyers—whom I get to work with every day—about a great Massachusetts bill that would prohibit the sale of location data without permission. I debrief with two EFF staffers who testified this week in Sacramento on two California bills—one on IP issues, another on police surveillance. I polish a speech I'm giving with one of my colleagues, who has kindly made time to help me. I prep for a call with young activists who want to discuss a bill idea.

There is no "typical" day in my job. The one constant is that I get to work with passionate people, at EFF and outside of it, who want to make the world a better place. We tackle tough problems, big and small—but always ones that matter. And, sure, I have good days and bad days. But I can say this: they are rarely boring.

Rory Mir, Associate Director of Community Organizing 

As an organizer at EFF, I juggle long-term projects and needs with rapid responses for both EFF and our local allies in our grassroots network, the Electronic Frontier Alliance. Days typically start with morning rituals that keep me grounded as a remote worker: I wake up, make coffee, put on music. I log in, set TODOs, clear my inbox. I get dressed, check the news, morning dog walk.

Back at my desk, I start with small tasks—reach out to a group I met at a conference, add an event to the EFF calendar, and promote EFA events on social media. Then, I get a call from a Portland EFA group. A city ordinance shedding light on police use of surveillance tech needs support. They’re working on a coalition letter EFF can sign, so I send it along to our street level surveillance team, schedule a meeting, and reach out to aligned groups in PDX.

Next up is a policy meeting on consumer privacy. Yesterday in Congress, the House passed a bill undermining privacy (again) and we need to kill it (again). We discuss key Senate votes, and I remember that an EFA group had a good relationship with one of those members in a campaign last year. I reach out to the group with links on our current campaign and see if they can help us lobby on the issue.

After a quick vegan lunch, I start a short Deeplinks post celebrating a major website connecting to the Fediverse, promoting folks’ autonomy online. I’m not quite done in time for my next meeting, planning an upcoming EFA meetup with my team. Before we get started though, an urgent message from San Diego interrupts us—the city council moved a crucial hearing on ALPRs to tomorrow. We reschedule and pivot to drafting an action alert email for the area as well as social media pushes to rally support.

In the home stretch, I set that meeting with Portland groups and make sure our newest EFA member has information on our workshop next week. After my last meeting for the day, a coalition call on Right to Repair (with Hayley!), I send my blog to a colleague for feedback, and wrap up the day in one of our off-topic chats. While passionately ranking Godzilla movies, my dog helpfully reminds me it’s time to log off and go on another walk.

Thorin Klosowski, Security and Privacy Activist

I typically start my day with reading—catching up on some broad policy things, but just as often poking through product-related news sites and consumer tech blogs—so I can keep an eye out for any new sorts of technology terrors that might be on the horizon, privacy promises that seem too good to be true, or any data breaches and other security gaffes that might need to be addressed.

If I’m lucky (or unlucky, depending on how you look at it), I’ll find something strange enough to bring to our Public Interest Technology crew for a more detailed look. Maybe it’ll be the launch of a new feature that promises privacy but doesn’t seem to deliver it, or in rare cases, a new feature that actually seems to. In either instance, if it seems worth a closer look, I’ll often then chat through all this with the technologists who specialize in the technology at play, then decide whether it’s worth writing something, or just keeping in our deep log of “terrible technologies to watch out for.” This process works in reverse, too—where someone on the PIT team brings up something they’re working on, like sketchyware on an Android tablet, and we’ll brainstorm some ways to help people who’re stuck with these types of things make them less sucky.

Today, I’m also tagging along with a couple of members of the PIT team at a meeting with representatives from a social media company that’s rolling out a new feature in its end-to-end encryption chat app. The EFF technologists will ask smart, technical questions and reference research papers with titles like, “Unbreakable: Designing for Trustworthiness in Private Messaging” while I furiously take notes and wonder how on earth we’ll explain all the positive (or negative) effects on individual privacy this feature might pose if it does in fact release.

With whatever time I have left, I’ll then work on Surveillance Self-Defense, our guide to protecting you and your friends from online spying. Today, I’m working through updating several of our encryption guides, which means chatting with our resident encryption experts on both the legal and PIT teams. What makes SSD so good, in my eyes, is how much knowledge backs every single word of every guide. This is what sets SSD apart from the graveyard of security guides online, but it also means a lot of wrangling to get eyes on everything that goes on the site. Sometimes a guide update clicks together smoothly and we update things quickly. Sometimes one update to a guide cascades across a half dozen others, and I start to feel like I have one of those serial killer boards, but I’m keeping track of several serial killers across multiple timelines. But however an SSD update plays out, it all needs to get translated, so I’ll finish off the day with a look at a spreadsheet of all the translations to make sure I don’t need to send anything new over (or, just as often, realize I’ve already gotten translations back that need to be put online).

*****

We love giving people a picture of the work we do on a daily basis at EFF to help protect your rights online. Our former Activism Directors, Elliot Harmon and Rainey Reitman, each wrote one of these blogs in the past as well. If you’d like to join us on the EFF Activism Team, or anywhere else in the organization, check out opportunities to do so here.

The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority—with 3.4 million warrantless searches of Americans’ communications in 2021 alone—thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance, not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But, because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans' communications “incidentally” under Section 702 surveillance. Not only does the government acquire these communications without a probable cause warrant, so long as the government can make out some connection to FISA’s very broad definition of “foreign intelligence,” the government can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. 702 creates an end run around the Constitution for the FBI and, with the Abbate memo, they are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the range of entities that the government can require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider—which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority which opens up this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   

What Can Go Wrong When Police Use AI to Write Reports?

Axon—the maker of widely used police body cameras and Tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that police can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially members of marginalized communities already subject to a disproportionate share of police interactions in the United States.

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges. Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with content and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak with mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If an officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon,” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between these options could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers adequately review for accuracy rather than rubber-stamp the AI-generated version? After all, police have been known to arrest people based on the results of a face recognition match without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ intense secrecy, combined with their frequent failure to comply with public records laws, how will the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of a report were generated by AI versus a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal model of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border—a region whose communities are exposed to a tremendous amount of constant monitoring, and which also serves as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect Wi-Fi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022.

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and have multiple levels of privacy, but for the most part change regularly (this is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard to track for a device that hasn’t paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close to each other so a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.
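To illustrate the kind of correlation described above, here is a minimal, hypothetical Python sketch. The sighting data, sensor names, and shortened addresses are all invented for illustration, and this is not TraffiCatch’s actual method: it simply links a rotating random address back to a stable public address by counting how often the two are seen together at the same sensor at the same time.

```python
from collections import defaultdict

# Hypothetical Bluetooth sightings: (time_sec, sensor, address, kind).
# Addresses are shortened for readability; real Bluetooth Device
# Addresses are 48 bits. "public" addresses never change, while
# "random" addresses rotate periodically (here R1 rotates to R2).
sightings = [
    (0,   "sensor_A", "AA:11", "public"), (0,   "sensor_A", "R1", "random"),
    (300, "sensor_B", "AA:11", "public"), (300, "sensor_B", "R1", "random"),
    (900, "sensor_C", "AA:11", "public"), (900, "sensor_C", "R2", "random"),
]

def correlate(sightings):
    """Count how often each random address co-occurs with each public
    address at the same sensor and time."""
    windows = defaultdict(lambda: {"public": set(), "random": set()})
    for t, sensor, addr, kind in sightings:
        windows[(t, sensor)][kind].add(addr)
    links = defaultdict(int)
    for groups in windows.values():
        for r in groups["random"]:
            for p in groups["public"]:
                links[(r, p)] += 1
    return links

links = correlate(sightings)
# Both rotating identifiers R1 and R2 co-occur with the stable public
# address AA:11, so all three sightings can be tied to one device owner.
```

This co-occurrence logic is also what makes combining Bluetooth detections with license plate reads so powerful: any one stable identifier in a person’s pocket or car can re-link all of their rotating ones.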

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers, another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border. ALPRs are a well-understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSSs are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSSs and failed to obtain special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

U.S. Senate and Biden Administration Shamefully Renew and Expand FISA Section 702, Ushering in a Two Year Expansion of Unconstitutional Mass Surveillance

One week after it was passed by the U.S. House of Representatives, the Senate has passed what Senator Ron Wyden has called “one of the most dramatic and terrifying expansions of government surveillance authority in history.” President Biden then rushed to sign it into law.

The perhaps ironically named “Reforming Intelligence and Securing America Act (RISAA)” does everything BUT reform Section 702 of the Foreign Intelligence Surveillance Act (FISA). RISAA not only reauthorizes this mass surveillance program, it greatly expands the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. The bill’s only significant “compromise” is a limited, two-year extension of this mass surveillance. But overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large telecommunications service providers: massive amounts of traffic on the Internet backbone are accessed and those communications on the government’s secret list are copied. And that’s just one part of the massive, expensive program. 

While Section 702 prohibits the NSA and FBI from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally.” The government can then conduct backdoor, warrantless searches of these “incidentally collected” communications.

The government cannot even follow the very lenient rules about what it does with the massive amount of information it gathers under Section 702, repeatedly abusing this authority by searching its databases for Americans’ communications. In 2021 alone, the FBI reported conducting up to 3.4 million warrantless searches of Section 702 data using Americans’ identifiers. Given this history of abuse, it is difficult to understand how Congress could decide to expand the government’s power under Section 702 rather than rein it in.

One of RISAA’s most egregious expansions is its large but ill-defined increase of the range of entities that have to turn over information to the NSA and FBI. This provision allegedly “responds” to a 2023 decision by the FISC Court of Review, which rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While the New York Times reports that the unknown company from this FISC opinion was a data center, this new provision is written so expansively that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider. This could potentially include landlords, maintenance people, and many others who routinely have access to your communications on the interconnected internet.

This is to say nothing of RISAA’s other substantial expansions. RISAA changes FISA’s definition of “foreign intelligence” to include “counternarcotics”: this will allow the government to use FISA to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths,” but also to any of their precursors. While surveillance under FISA has (contrary to what most Americans believe) never been limited exclusively to terrorism and counterespionage, RISAA’s expansion of FISA to ordinary crime is unacceptable.

RISAA also allows the government to use Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released in 2023, the FISC repeatedly denied government attempts to obtain some version of this authority, before finally approving it for the first time in 2023. By formally lowering Section 702’s protections for immigrants and asylum seekers, RISAA exacerbates the risk that government officials could discriminate against members of these populations on the basis of their sexuality, gender identity, religion, or political beliefs.

Faced with massive pushback from EFF and other civil liberties advocates, some members of Congress, like Senator Ron Wyden, raised the alarm. We were able to squeeze out a couple of small concessions. One was a shorter reauthorization period for Section 702, meaning that the law will be up for review in just two more years. Also, in a letter to Congress, the Department of Justice claimed it would only interpret the new provision to apply to the type of unidentified businesses at issue in the 2023 FISC opinion. But a pinky promise from the current Department of Justice is not enforceable and easily disregarded by a future administration. There is some possible hope here, because Senator Mark Warner promised to return to the provision in a later defense authorization bill, but this whole debacle just demonstrates how Congress gives the NSA and FBI nearly free rein when it comes to protecting Americans – any limitation that actually protects us (and here the FISA Court actually did some protecting) is just swept away.

RISAA’s passage is a shocking reversal—EFF and our allies had worked hard to put together a coalition aimed at enacting a warrant requirement for Americans and some other critical reforms, but the NSA, FBI and their apologists just rolled Congress with scary-sounding (and incorrect) stories that a lapse in the spying was imminent. It was a clear dereliction of Congress’s duty to oversee the intelligence community in order to protect all of the rest of us from its long history of abuse.

After more than 20 years of this work, we know that rolling back any surveillance authority, especially one as deeply entrenched as Section 702, is an uphill fight. But we aren’t going anywhere. We had more Congressional support this time than we’ve had in the past, and we’ll be working to build that over the next two years.

Too many members of Congress (and the Administrations of both parties) don’t see any downside to violating your privacy and your constitutional rights in the name of national security. That needs to change.

Fourth Amendment is Not For Sale Act Passed the House, Now it Should Pass the Senate

The Fourth Amendment is Not For Sale Act, H.R.4639, originally introduced in the Senate by Senator Ron Wyden in 2021, has now made the important and historic step of passing the U.S. House of Representatives. In an era when it often seems like Congress cannot pass much-needed privacy protections, this is a victory for vulnerable populations, people who want to make sure their location data is private, and the hard-working activists and organizers who have pushed for the passage of this bill.

Every day, your personal information is harvested by your smartphone applications, sold to data brokers, and used by advertisers hoping to sell you things. But what safeguards prevent the government from shopping in that same data marketplace? Mobile data regularly bought and sold, like your geolocation, is information that law enforcement or intelligence agencies would normally have to get a warrant to acquire. But it does not require a warrant for law enforcement agencies to just buy the data. The U.S. government has been using its purchase of this information as a loophole for acquiring personal information on individuals without a warrant.

Now is the time to close that loophole.

At EFF, we’ve been talking about the need to close the data broker loophole for years. We even launched a massive investigation into the data broker industry which revealed Fog Data Science, a company that has claimed in marketing materials that it has “billions” of data points about “over 250 million” devices and that its data can be used to learn about where its subjects work, live, and their associates. We found that close to 20 law enforcement agencies used or were offered this tool.

It’s time for the Senate to close this incredibly dangerous and invasive loophole. If police want a person’s, or a whole community’s, location data, they should have to get a warrant to see it.

Take action

Tell Congress: 702 Needs Serious Reforms

Bad Amendments to Section 702 Have Failed (For Now)—What Happens Next?

Yesterday, the House of Representatives voted against considering a largely bad bill that would have unacceptably expanded the tentacles of Section 702 of the Foreign Intelligence Surveillance Act, along with reauthorizing it and introducing some minor fixes. Section 702 is Big Brother’s favorite mass surveillance law that EFF has been fighting since it was first passed in 2008. The law is currently set to expire on April 19. 

Yesterday’s decision not to decide is good news, at least temporarily. Once again, a bipartisan coalition of lawmakers—led by Rep. Jim Jordan and Rep. Jerrold Nadler—has staved off the worst outcome of expanding 702 mass surveillance in the guise of “reforming” it. But the fight continues and we need all Americans to make their voices heard.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

Tell Congress: 702 Needs Serious Reforms

Yesterday’s vote means the House also will not consider amendments to Section 702 surveillance introduced by members of the House Judiciary Committee (HJC) and House Permanent Select Committee on Intelligence (HPSCI). As we discuss below, while the HJC amendments would contain necessary, minimum protections against Section 702’s warrantless surveillance, the HPSCI amendments would impose no meaningful safeguards upon Section 702 and would instead increase the threats Section 702 poses to Americans’ civil liberties.

Section 702 expressly authorizes the government to collect foreign communications inside the U.S. for a wide range of purposes, under the umbrellas of national security and intelligence gathering. While that may sound benign for Americans, foreign communications include a massive amount of Americans’ communications with people (or services) outside the United States. Under the government’s view, intelligence agencies and even domestic law enforcement should have backdoor, warrantless access to these “incidentally collected” communications, instead of having to show a judge there is a reason to query Section 702 databases for a specific American's communications.

Many amendments to Section 702 have recently been introduced. In general, amendments from members of the HJC aim at actual reform (although we would go further in many instances). In contrast, members of HPSCI have proposed bad amendments that would expand Section 702 and undermine necessary oversight. Here is our analysis of both HJC’s decent reform amendments and HPSCI’s bad amendments, as well as the problems the latter might create if they return.

House Judiciary Committee’s Amendments Would Impose Needed Reforms

The most important amendment HJC members have introduced would require the government to obtain court approval before querying Section 702 databases for Americans’ communications, with exceptions for exigency, consent, and certain queries involving malware. As we recently wrote regarding a different Section 702 bill, because Section 702’s warrantless surveillance lacks the safeguards of probable cause and particularity, it is essential to require the government to convince a judge that there is a justification before the “separate Fourth Amendment event” of querying for Americans’ communications. This is a necessary, minimum protection and any attempts to renew Section 702 going forward should contain this provision.

Another important amendment would prohibit the NSA from resuming “abouts” collection. Through abouts collection, the NSA collected communications that were neither to nor from a specific surveillance target but merely mentioned the target. While the NSA voluntarily ceased abouts collection following Foreign Intelligence Surveillance Court (FISC) rulings that called into question the surveillance’s lawfulness, the NSA left the door open to resume abouts collection if it felt it could “work that technical solution in a way that generates greater reliability.” Under current law, the NSA need only notify Congress when it resumes collection. This amendment would instead require the NSA to obtain Congress’s express approval before it can resume abouts collection, which—given this surveillance’s past abuses—would be notable.

The other HJC amendment Congress should accept would require the FBI to give a quarterly report to Congress of the number of queries it has conducted of Americans’ communications in its Section 702 databases and would also allow high-ranking members of Congress to attend proceedings of the notoriously secretive FISC. More congressional oversight of FBI queries of Americans’ communications and FISC proceedings would be good. That said, even if Congress passes this amendment (which it should), both Congress and the American public deserve much greater transparency about Section 702 surveillance.  

House Permanent Select Committee on Intelligence’s Amendments Would Expand Section 702

Instead of much-needed reforms, the HPSCI amendments expand Section 702 surveillance.

One HPSCI amendment would add “counternarcotics” to FISA’s definition of “foreign intelligence information,” expanding the scope of mass surveillance even further from the antiterrorism goals that most Americans associate with FISA. In truth, FISA’s definition of “foreign intelligence information” already goes beyond terrorism. But this counternarcotics amendment would further expand “foreign intelligence information” to allow FISA to be used to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths” but also to any of their precursors. Given the massive amount of Americans’ communications the government already collects under Section 702 and the government’s history of abusing Americans’ civil liberties through searching these communications, the expanded collection this amendment would permit is unacceptable.

Another amendment would authorize using Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released last year, the government has sought some version of this authority for years, and the FISC repeatedly denied it—finally approving it for the first time in 2023. The FISC opinion is very redacted, which makes it impossible to know either the current scope of immigration and visa-related surveillance under Section 702 or what the intelligence agencies have sought in the past. But regardless, it’s deeply concerning that HPSCI is trying to formally lower Section 702 protections for immigrants and asylum seekers. We’ve already seen the government revoke people’s visas based upon their political opinions—this amendment would put this kind of thing on steroids.

The last HPSCI amendment tries to make more companies subject to Section 702’s required turnover of customer information in more instances. In 2023, the FISC Court of Review rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While we don’t know the details of the secret proceedings because the FISC Court of Review opinion is heavily redacted, this is an ominous attempt to increase the scope of providers subject to 702. With this amendment, HPSCI is attempting to legislatively overrule a court already famously friendly to the government. HPSCI Chair Mike Turner acknowledged as much in a House Rules Committee hearing earlier this week, stating that this amendment “responds” to the FISC Court of Review’s decision.

What’s Next 

This hearing was unlikely to be the last time Congress considers Section 702 before April 19—we expect another attempt to renew this surveillance authority in the coming days. We’ve been very clear: Section 702 must not be renewed without essential reforms that protect privacy, improve transparency, and keep the program within the confines of the law. 

Take action

Tell Congress: 702 Needs Serious Reforms

The White House is Wrong: Section 702 Needs Drastic Change

With Section 702 of the Foreign Intelligence Surveillance Act set to expire later this month, the White House recently released a memo objecting to the SAFE Act—legislation introduced by Senators Dick Durbin and Mike Lee that would reauthorize Section 702 with some reforms. The White House is wrong. SAFE is a bipartisan bill that may be our most realistic chance of reforming a dangerous NSA mass surveillance program that even the federal government’s privacy watchdog and the White House itself have acknowledged needs reform.

As we’ve written, the SAFE Act does not go nearly far enough in protecting us from the warrantless surveillance the government now conducts under Section 702. But, with surveillance hawks in the government pushing for a reauthorization of their favorite national security law without any meaningful reforms, the SAFE Act might be privacy and civil liberties advocates’ best hope for imposing some checks upon Section 702.

Section 702 is a serious threat to the privacy of those in the United States. It authorizes the collection of overseas communications for national security purposes, and, in a globalized world, this allows the government to collect a massive amount of Americans’ communications. As Section 702 is currently written, intelligence agencies and domestic law enforcement have backdoor, warrantless access to millions of communications from people with clear constitutional rights.

The White House objects to the SAFE Act’s two major reforms. The first requires the government to obtain court approval before accessing the content of communications for people in the United States which have been hoovered up and stored in Section 702 databases—just like police have to do to read your letters or emails. The SAFE Act’s second reform closes the “data broker loophole” by largely prohibiting the government from purchasing personal data it would otherwise need a warrant to collect. While the White House memo is just the latest attempt to scare lawmakers into reauthorizing Section 702, it omits important context and distorts the key SAFE Act amendments’ effects.

The government has repeatedly abused Section 702 by searching its databases for Americans’ communications. Every time, the government claims it has learned from its mistakes and won’t repeat them, only for another abuse to come to light years later. The government asks you to trust it with the enormously powerful surveillance tool that is Section 702—but it has proven unworthy of that trust.

The Government Should Get Judicial Approval Before Accessing Americans’ Communications

Requiring the government to obtain judicial approval before it can access the communications of Americans and those in the United States is a necessary, minimum protection against Section 702’s warrantless surveillance. Because Section 702 does not require safeguards of particularity and probable cause when the government initially collects communications, it is essential to require the government to at least convince a judge that there is a justification before the “separate Fourth Amendment event” of the government accessing the communications of Americans it has collected.

The White House’s memo claims that the government shouldn’t need to get court approval to access communications of Americans that were “lawfully obtained” under Section 702. But this ignores the fundamental differences between Section 702 and other surveillance. Intelligence agencies and law enforcement don’t get to play “finders keepers” with our communications just because they have a pre-existing program that warrantlessly vacuums them all up.

The SAFE Act has exceptions from its general requirement of court approval for emergencies, consent, and—for malicious software—“defensive cybersecurity queries.” While the White House memo claims these are “dangerously narrow,” exigency and consent are longstanding, well-developed exceptions to the Fourth Amendment’s warrant requirement. And the SAFE Act gives the government even more leeway than the Fourth Amendment ordinarily does in also excluding “defensive cybersecurity queries” from its requirement of judicial approval.

The Government Shouldn’t Be Able to Buy What It Would Otherwise Need a Warrant to Collect

The SAFE Act properly imposes broad restrictions upon the government’s ability to purchase data—because way too much of our data is available for the government to purchase. Both the FBI and NSA have acknowledged knowingly buying data on Americans. As we’ve written many times, the commercially available information that the government purchases can be very revealing about our most intimate, private communications and associations. The Director of National Intelligence’s own report on government purchases of commercially available information recognizes this data can be “misused to pry into private lives, ruin reputations, and cause emotional distress and threaten the safety of individuals.” This report also recognizes that this data can “disclose, for example, the detailed movements and associations of individuals and groups, revealing political, religious, travel, and speech activities.”

The SAFE Act would go a significant way towards closing the “data broker loophole” that the government has been exploiting. Contrary to the White House’s argument that Section 702 reauthorization is “not the vehicle” for protecting Americans’ data privacy, closing the “data broker loophole” goes hand-in-hand with putting crucial guardrails upon Section 702 surveillance: the necessary reform of requiring court approval for government access to Americans’ communications is undermined if the government is able to warrantlessly collect revealing information about Americans some other way. 

The White House further objects that the SAFE Act does not address data purchases by other countries and nongovernmental entities, but this misses the point. The best way Congress can protect Americans’ data privacy from these entities and others is to pass comprehensive data privacy regulation. But, in the context of Section 702 reauthorization, the government is effectively asking for special surveillance permissions for itself: that its surveillance continue to be subjected to minimal oversight while other countries’ surveillance practices are regulated. (This has been a pattern as of late.) The Fourth Amendment prohibits intelligence agencies and law enforcement from giving themselves the prerogative to invade our privacy.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running this through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement agencies have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of a suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers, paired with 3D scans of their faces. It is currently the only company offering phenotyping and only in concert with a forensic genetic genealogy investigation. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And although the detective’s request violated Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not produce an accurate likeness of a suspect, and deploying these untested algorithms without any oversight puts people at risk of becoming suspects for crimes they didn’t commit. In one case from Canada, the Edmonton Police Service issued an apology after using Parabon’s DNA phenotyping services to identify a suspect, acknowledging its failure to balance the harms to the Black community against the potential investigative value.

EFF continues to call for a complete ban on government use of face recognition—because these are the results otherwise. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one need to be perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already banned government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies

There has been a tremendous amount of hand-wringing and nervousness about how so-called artificial intelligence might end up destroying the world. The fretting has only gotten worse in the wake of a U.S. State Department-commissioned report on the security risks of weaponized AI.

Whether these messages come from popular films like WarGames or The Terminator, from reports that AI supposedly favors the nuclear option in digital simulations more than it should, or from the idea that AI could assess nuclear threats more quickly than humans—all of these scenarios have one thing in common: they end with nukes (almost) being launched because a computer either had the ability to pull the trigger or convinced humans to do so by simulating an imminent nuclear threat. The purported risk of AI comes not just from yielding “control” to computers, but also from the ability of advanced algorithmic systems to breach cybersecurity measures or to manipulate and socially engineer people with realistic voice, text, images, video, or digital impersonations.

But there is one easy way to avoid much of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons. This means denying algorithms ultimate decision-making power, but it also means building in protocols and safeguards so that generative AI cannot be used to impersonate or simulate the orders capable of launching attacks. It’s really simple, and we’re far from the only (or the first) people to suggest the radical idea that we just not integrate computer decision making into many important decisions, from deciding a person’s freedom to launching first or retaliatory strikes with nuclear weapons.


First, let’s define terms. I am using "Artificial Intelligence" purely for expediency and because it is the term most commonly used by vendors and government agencies to describe automated algorithmic decision making, despite the fact that it is a problematic term that shields human agency from criticism. What we are talking about here is an algorithmic system, fed a tremendous amount of historical or hypothetical information, that leverages probability and context in order to choose what outcomes are expected based on the data it has been fed. It’s how training algorithmic chatbots on posts from social media resulted in chatbots regurgitating the racist rhetoric they were trained on. It’s also how predictive policing algorithms reaffirm racially biased policing by sending police to neighborhoods where the police already patrol and where they make a majority of their arrests. From the vantage point of the data, it looks as if that is the only neighborhood with crime, because police don’t typically arrest people in other neighborhoods. As AI expert and technologist Joy Buolamwini has said, "With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion... Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made."
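To make that feedback loop concrete, here is a toy simulation (purely illustrative, not modeled on any vendor’s actual product): two neighborhoods have identical underlying crime rates, but because patrols are dispatched wherever past arrest data is densest, new arrests—and therefore new data—accumulate only in the already-patrolled neighborhood.

```python
import random

random.seed(0)

# Toy assumption: both neighborhoods have the SAME underlying crime rate.
TRUE_CRIME_RATE = {"A": 0.3, "B": 0.3}

# Historical bias: past arrest records happen to skew toward neighborhood A.
arrests = {"A": 10, "B": 1}

for day in range(1000):
    # "Predictive" step: patrol wherever the data shows the most arrests.
    patrolled = max(arrests, key=arrests.get)
    # Crime occurs in both neighborhoods, but it is only observed
    # (and recorded as an arrest) where police actually patrol.
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        arrests[patrolled] += 1

print(arrests)  # Arrests pile up in A; B looks crime-free to the data.
```

Because neighborhood B’s count can never grow without a patrol, the algorithm never learns anything new about it: the data confirms its own bias.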

Military Tactics Shouldn’t Drive AI Use

As EFF wrote in 2018, “Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too.” (You can read EFF’s whole 2018 white paper, The Cautious Path to Advantage: How Militaries Should Plan for AI, here.)

Just as in policing, the military faces a compelling pressure (not to mention the marketing from eager companies hoping to get rich off defense contracts) to innovate constantly in order to claim technical superiority. But integrating technology for innovation’s sake alone creates a great risk of unforeseen danger. AI-enhanced targeting is liable to get things wrong. AI can be fooled or tricked. It can be hacked. And giving AI the power to escalate armed conflicts, especially on a global or nuclear scale, might just bring about the much-feared AI apocalypse, one that can be avoided simply by keeping a human finger on the button.


We’ve written before about how necessary it is to ban police from arming robots (whether remote-controlled or autonomous) in a domestic context for the same reasons. The notion of so-called autonomy among machines and robots creates a false sense of agency; the idea that only the computer is to blame for falsely targeting the wrong person, or for misreading signs of incoming missiles and launching a nuclear weapon in response, obscures who is really at fault. Humans put computers in charge of making the decisions, and humans train the programs that make them.

AI Does What We Tell It To

In the words of linguist Emily Bender, “AI,” and especially its text-based applications, is a “stochastic parrot”: it echoes back to us the things we taught it, “determined by random, probabilistic distribution.” In short, we give it the material it learns from, it learns it, and then it draws conclusions and makes decisions based on that historical dataset. If you teach an algorithmic model that nine times out of ten a nation will launch a retaliatory strike when missiles are fired at it, then the first time that model mistakes a flock of birds for inbound missiles, that is exactly what it will do.
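That bird-flock failure mode can be sketched in a few lines. This is a hypothetical illustration, not any real system: a “policy” fitted to historical responses simply replays the majority response, with no way to question whether the observation triggering it is genuine.

```python
from collections import Counter

# Hypothetical training data: in 9 of 10 historical cases where sensors
# reported inbound missiles, the recorded response was retaliation.
history = [("missiles_detected", "retaliate")] * 9 + [("missiles_detected", "hold")]

def fit_policy(history):
    """'Train' by picking the most frequent historical response per observation."""
    by_obs = {}
    for obs, response in history:
        by_obs.setdefault(obs, Counter())[response] += 1
    return {obs: counts.most_common(1)[0][0] for obs, counts in by_obs.items()}

policy = fit_policy(history)

# A sensor mistakes a flock of birds for inbound missiles. The model has
# no way to know the reading is wrong; it just replays the majority label.
sensor_reading = "missiles_detected"  # in reality: birds
print(policy[sensor_reading])  # -> retaliate
```

The flaw isn’t that the model is malfunctioning; it is doing exactly what its data taught it. The flaw is putting such a model anywhere near the trigger.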

To that end, AI scholar Kate Crawford argues, “AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests.”

AI does what we teach it to do. It mimics the decisions it is taught to make, whether through hypotheticals or historical data. This means that, yet again, we are not powerless against a coming AI doomsday. We teach AI how to operate. We give it control of escalation, weaponry, and military response. We could just not.

Governing AI Doesn’t Mean Making it More Secret–It Means Regulating Use 

Part of the recent report commissioned by the U.S. Department of State on the weaponization of AI included one troubling recommendation: making the inner workings of AI more secret. To keep algorithms from being tampered with or manipulated, the full report (as summarized by Time) suggests that a new governmental regulatory agency responsible for AI should criminalize publishing the inner workings of AI, potentially punishing it with jail time. This means that how AI functions in our daily lives, and how the government uses it, could never be open source and would always live inside a black box where we could never learn the datasets informing its decision making. With so much of our lives already governed by automated decision making, from the criminal justice system to employment, criminalizing the only route for people to learn how those systems are being trained seems counterproductive and wrong.

Opening up the inner workings of AI puts more eyes on how a system functions and makes it easier, not harder, to spot manipulation and tampering. It might also mitigate the biases and harms that skewed training datasets create in the first place.

Conclusion

Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with, but we have to understand what these technologies are and what they are not. They are neither “artificial” nor “intelligent”; they do not represent an alternate and spontaneously occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, one can usually find their origins somewhere in the data the systems were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense context, and will hopefully alleviate some of our collective sci-fi panic.

This doesn’t mean that people won’t weaponize AI, and some already are, in the form of political disinformation and realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common-sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control.
