
EFF to Massachusetts’ Highest Court: Pretrial Electronic Monitoring Should Not Eviscerate Privacy Rights

By: Hannah Zhao
October 22, 2024 at 11:58

Placing someone on location monitoring for one purpose does not justify law enforcement accessing that information for a completely different purpose without a proper warrant.

EFF joined the Committee for Public Counsel Services, the ACLU, the ACLU of Massachusetts, and the Massachusetts Association of Criminal Defense Lawyers in filing an amicus brief in the Massachusetts Supreme Judicial Court in Commonwealth v. Govan, arguing just that.

In this case, the defendant, Anthony Govan, was subjected to pretrial electronic monitoring as a condition of release prior to trial. While investigating a completely unrelated crime, the police asked the pretrial electronic monitoring division for the identity and location of “anyone” who was near the scene of that incident. Mr. Govan’s data was part of the response, and that information was used against him in the unrelated case.

Our joint amicus brief highlighted the coercive nature of electronic monitoring programs. When the alternative is being locked up, there is no meaningful consent to the collection of information under electronic monitoring. At the same time, as someone on pretrial release, Mr. Govan had a reasonable expectation of privacy in his location information. As courts, including the U.S. Supreme Court, have recognized, location and movement information is incredibly sensitive and revealing. Being on electronic monitoring does not mean a person has no expectation of privacy, whether they are going to a political protest, a prayer group, an abortion clinic, a gun show, or their own home. Pretrial electronic monitoring collects this information around the clock—information that otherwise would not have been available to law enforcement through traditional tools.

The violation of privacy is especially problematic in this case because Mr. Govan had not been convicted and was still presumed innocent. Under current law, those on pretrial release are entitled to far stronger Fourth Amendment protections than those on monitored release after a conviction. As argued in the amicus brief, absent a proper warrant, the information gathered by the electronic monitoring program should be used only to ensure that Mr. Govan was complying with his pretrial release conditions.

Lastly, although this case can be decided on the absence of a warrant or an applicable warrant exception, we argued that the court should provide guidance for future warrants. The Fourth Amendment and its state corollaries prohibit “general warrants,” which are akin to fishing expeditions, and instead require that warrants meet nexus and particularity requirements. Bulk location data requests like the one in this case cannot meet that standard.

While electronic monitoring is marketed as an alternative to detention, the evidence does not bear this out. Courts should not allow information gathered through this expansion of state surveillance to be used beyond its original purpose without a warrant.

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of a safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits and immigrant rights groups to government accountability watchdogs and civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it is used in decision-making about immigration enforcement and adjudications.

Read the letter here. 

The letter highlighted the findings of a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies, U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives establishing standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt by the federal government to use AI responsibly and to contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated it into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence suggests otherwise, or, at best, is mixed.

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, which EFF categorically opposes: they often “predict” that crimes are more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design the software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.
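The résumé-screening failure is easy to reproduce in miniature. Below is a minimal, purely illustrative sketch (synthetic data, invented feature names, scikit-learn assumed available), not any agency's actual system, showing how a model trained on biased historical decisions learns to penalize a protected attribute that has nothing to do with merit:

```python
# Hypothetical sketch of "garbage in, garbage out": a classifier trained
# on historically biased labels reproduces that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Invented applicant pool: one legitimate "skill" feature and one
# protected attribute (0 or 1) that should be irrelevant to the decision.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: past decision-makers favored group 0, so the
# "ground truth" the model learns from encodes prejudice, not merit.
hired = (skill + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model assigns a large negative weight to the protected attribute:
# equally skilled members of group 1 are scored lower.
print("learned weights (skill, group):", model.coef_[0])
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"hiring probability at average skill, group {g}: {p:.2f}")
```

No malicious intent is required anywhere in the pipeline; skewed training data alone is enough to make two equally skilled applicants receive different scores.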

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operations without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process for determining eligibility for immigration benefits or relief, assessing credibility in asylum applications, and gauging whether an individual poses a public safety or national security threat. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding these tools. DHS urgently needs to be held accountable for its adoption of opaque and untested AI programs promoted by those with a financial interest in the technology’s proliferation. Until DHS adequately addresses the concerns raised in the letter and the report, the Department should be prohibited from using AI tools.

Backyard Privacy in the Age of Drones

By: Hannah Zhao
August 27, 2024 at 11:12

This article was originally published in The Legal Aid Society’s Decrypting a Defense newsletter on August 5, 2024, and is reprinted here with permission.

Police departments and law enforcement agencies are increasingly collecting personal information using drones, also known as unmanned aerial vehicles. In addition to high-resolution photographic and video cameras, police drones may be equipped with myriad spying payloads, such as live-video transmitters, thermal imaging, heat sensors, mapping technology, automated license plate readers, cell site simulators, cell phone signal interceptors and other technologies. Captured data can later be scrutinized with backend software tools like license plate readers and face recognition technology. There have even been proposals for law enforcement to attach lethal and less-lethal weapons to drones and robots. 

Over the past decade or so, police drone use has dramatically expanded. The Electronic Frontier Foundation’s Atlas of Surveillance lists more than 1,500 law enforcement agencies across the US that have been reported to employ drones. The result is that backyards, which are part of the constitutionally protected curtilage of a home, are frequently being captured, either intentionally or incidentally. In grappling with the legal implications of this phenomenon, we are confronted by a pair of U.S. Supreme Court cases from the 1980s: California v. Ciraolo and Florida v. Riley. There, the Supreme Court ruled that warrantless aerial surveillance conducted by law enforcement in low-flying manned aircraft did not violate the Fourth Amendment because there was no reasonable expectation of privacy from what was visible from the sky. Although there are fundamental differences between surveillance by manned aircraft and by drones, some courts have extended the analysis to situations involving drones, shutting the door to federal constitutional challenges.

Yet Americans, legislators, and even judges have long voiced serious concerns about the threat of rampant and unchecked aerial surveillance. A couple of years ago, the Fourth Circuit found in Leaders of a Beautiful Struggle v. Baltimore Police Department that a mass aerial surveillance program (using manned aircraft) covering most of the city violated the Fourth Amendment. The exponential surge in police drone use has only heightened the privacy concerns underpinning that and similar decisions. Unlike the manned aircraft in Ciraolo and Riley, drones can silently and unobtrusively gather an immense amount of data at only a tiny fraction of the cost of traditional aircraft. Additionally, drones are smaller and easier to operate and can get into spaces—such as under eaves or between buildings—that planes and helicopters can never enter. And the noise created by manned airplanes and helicopters effectively functions as notice to those who are being watched, whereas drones can easily record information surreptitiously.

In response to the concerns regarding drone surveillance voiced by civil liberties groups and others, some law enforcement agencies, like the NYPD, have pledged to abide by internal policies to refrain from warrantless use over private property. But without enforcement mechanisms, those empty promises are easily discarded by officials when they consider them inconvenient, as NYC Mayor Eric Adams did in announcing that drones would, in fact, be deployed to indiscriminately spy on backyard parties over Labor Day.

Barring a seismic shift away from Ciraolo and Riley by the U.S. Supreme Court (which seems nigh impossible given the current bench’s approach to the Fourth Amendment), protection from warrantless aerial surveillance—and successful legal challenges—will have to come from the states. Indeed, six months after Ciraolo was decided, the California Supreme Court held in People v. Cook that under the state’s constitution, an individual had a reasonable expectation that police would not conduct warrantless surveillance of their backyard from the air. More recently, other states, such as Hawai’i, Vermont, and Alaska, have similarly relied on their state constitutions’ Fourth Amendment corollaries to find warrantless aerial surveillance improper. Some states have also passed new laws regulating governmental drone use. And at least half a dozen states, including Florida, Maine, Minnesota, Nevada, North Dakota, and Virginia, have statutes requiring warrants (with exceptions) for police use.

Law enforcement’s use of drones will only proliferate in the coming years, and drone capabilities continue to evolve rapidly. Courts and legislatures must keep pace to ensure that privacy rights do not fall victim to the advancement of technology.

For more information on drones and other surveillance technologies, please visit EFF’s Street Level Surveillance guide at https://sls.eff.org/.

2 Fast 2 Legal: How EFF Helped a Security Researcher During DEF CON 32

This year, like every year, EFF sent a variety of lawyers, technologists, and activists to the summer security conferences in Las Vegas to help foster support for the security research community. While we were at DEF CON 32, security researcher Dennis Giese received a cease-and-desist letter on Thursday afternoon over his talk scheduled just hours later, on Friday morning. EFF lawyers met with Dennis almost immediately, and by Sunday, Dennis was able to give his talk. Here’s what happened, and why the fight for coders’ rights matters.

Throughout the year, we receive a number of inquiries from security researchers who seek to report vulnerabilities or present on technical exploits and want to understand the legal risks involved. Enter the EFF Coders’ Rights Project, designed to help programmers, tinkerers, and innovators who wish to responsibly explore technologies and report on those findings. Our Coders’ Rights lawyers counsel many of those who reach out to us on everything from mitigating legal risk in their talks, to reporting vulnerabilities they’ve found, to responding to legal threats. The number of inquiries often ramps up in the months leading to “hacker summer camp,” but we usually have at least a couple of weeks to help and advise the researcher.

In this case, however, we did our work on an extremely short schedule.

Dennis is a prolific researcher who has presented his work at conferences around the world. At DEF CON, one of the talks he planned with a co-presenter involved digital locks, including those made by the vendor Digilock. In the months leading up to the presentation, Dennis shared his findings with Digilock and sought to discuss potential remediations. Digilock expressed interest in these conversations, so it came as a surprise when, on the eve of the presentation, the company sent him a cease-and-desist letter raising a number of baseless legal claims.

Because we had lawyers on the ground at DEF CON, Dennis was able to connect with EFF soon after receiving the cease-and-desist, and, together with Kurt Opsahl, a former EFF attorney and current Special Counsel to EFF, we agreed to represent him in responding to Digilock. Over the course of forty-eight hours, we met with Digilock’s lawyers and ultimately facilitated a productive conversation between Dennis and the company’s CEO.

Good-faith security researchers increase security for all of us.

To its credit, Digilock agreed to rescind the cease-and-desist letter and also provided Dennis with useful information about its plans to address vulnerabilities discussed in his research.

Dennis was able to give the talk, with this additional information, on Sunday, the last day of DEF CON.

We are proud we could help Dennis navigate what can be a scary situation of receiving last-minute legal threats, and are happy that he was ultimately able to give his talk. Good-faith security researchers like Dennis increase security for all of us who use digital devices. By identifying and disclosing vulnerabilities, hackers improve security for every user who depends on information systems for their daily life and work. If we do not know about security vulnerabilities, we cannot fix them, and we cannot make better computer systems in the future. Dennis’s research was not only legal; it demonstrated real-world problems that the companies involved need to address.

Just as important as discovering security vulnerabilities is reporting the findings so that users can protect themselves, vendors can avoid introducing vulnerabilities in the future, and other security researchers can build on that information. When researchers publicly explain these sorts of attacks and propose remedies, other companies that make similar devices can benefit by fixing the same vulnerabilities. In discovering and reporting their findings, security researchers like Dennis help build a safer future for all of us.

However, this incident reminds us that even good-faith hackers often face legal challenges meant to silence them from publicly sharing the legitimate fruits of their labor. The Coders’ Rights Project is part of our longstanding work to protect researchers through legal defense, education, amicus briefs, and involvement in the community. Through it, we hope to promote innovation and safeguard the rights of curious tinkerers and hackers everywhere.

We must continue to fight for the right to share this research, which leads to better security for us all. If you are a security researcher in need of legal assistance or have concerns before giving a talk, do not hesitate to reach out to us. If you'd like to support more of this work, please consider donating to EFF.

The Alaska Supreme Court Takes Aerial Surveillance’s Threat to Privacy Seriously, Other Courts Should Too

By: Hannah Zhao
May 29, 2024 at 18:16

In March, the Alaska Supreme Court held in State v. McKelvey that the Alaska Constitution required law enforcement to obtain a warrant before photographing a private backyard from an aircraft. In this case, the police took photographs of Mr. McKelvey’s property, including the constitutionally protected curtilage area, from a small aircraft using a zoom lens.

In arguing that Mr. McKelvey did not have a reasonable expectation of privacy, the government raised various factors that have been used to justify warrantless surveillance in other jurisdictions. These included the ubiquity of small aircraft flying overhead in Alaska; the commercial availability of the camera and lens; the availability of aerial footage of the land elsewhere; and the allegedly unobtrusive nature of the surveillance.

In response, the Court divorced the ubiquity and availability of the technology from whether people would reasonably expect the government to use it to spy on them. The Court observed that the government’s decision to spend resources taking its own photos demonstrates that whatever images were already available were insufficient for law enforcement needs. Moreover, the inability or unlikelihood of the spying being detected adds to, rather than detracts from, its pernicious nature, because “if the surveillance technique cannot be detected, then one can never fully protect against being surveilled.”

Throughout its analysis, the Alaska Supreme Court demonstrated a grounded understanding of modern technology—as well as its future—and its effect on privacy rights. At the outset, the Court pointed out that one might think this warrantless aerial surveillance was not a significant threat to privacy rights because “aviation gas is expensive, officers are busy, and the likelihood of detecting criminal activity with indiscriminate surveillance flights is low.” However, the Court added pointedly, “the rise of drones has the potential to change that equation.” We made similar arguments and are glad to see that courts are taking the threat seriously.

This is a significant victory for Alaskans and their privacy rights, and it stands in contrast to a pair of U.S. Supreme Court cases from the 1980s, California v. Ciraolo and Florida v. Riley. In those cases, the justices found no violation of the federal constitution in aerial surveillance from low-flying manned aircraft. But there have been seismic changes in the capabilities of surveillance technology since those decisions, and courts should consider these developments rather than merely applying precedents uncritically.

With this decision, Alaska joins California, Hawai’i, and Vermont in finding that warrantless aerial surveillance violates their state constitutions’ prohibitions on unreasonable search and seizure. Other courts should follow suit to ensure that privacy rights do not fall victim to the advancement of technology.

Shots Fired: Congressional Letter Questions DHS Funding of ShotSpotter

There is a growing pile of evidence that cities should drop ShotSpotter, the notorious surveillance system that purportedly uses acoustic sensors to detect gunshots, due to its inaccuracies and the danger it creates in communities where it’s installed. In yet another blow to the product and the surveillance company behind it—SoundThinking—members of Congress have sent a letter calling on the Department of Homeland Security to investigate how it provides funding to local police to deploy the product.

The seven-page letter, from Senators Ed Markey, Ron Wyden, and Elizabeth Warren and Representative Ayanna Pressley, begins by questioning the “accuracy and effectiveness” of ShotSpotter, and then outlines some of the latest evidence of its abysmal performance, including multiple studies showing false positive rates—i.e., incorrectly classifying non-gunshot sounds as gunshots—of 70% or higher. In addition to its ineffectiveness, the members of Congress voiced serious concerns about ShotSpotter’s contribution to discrimination, civil rights violations, and poor policing practices, given that most ShotSpotter sensors are installed in overwhelmingly “Black, Brown and Latin[e] communities” at the request of local law enforcement. Together, the inefficacy of the technology and the placement of the sensors can result in police being deployed, guns drawn, to what they expect to be a dangerous situation, increasing the chances of all-too-common police violence against civilians in the area.
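To make concrete what a figure like that means on the ground, here is a minimal arithmetic sketch, using entirely invented numbers rather than any study’s actual data, of the quantity those studies report: the share of alerts that turn out not to be gunfire.

```python
# Hypothetical illustration only: the share of acoustic alerts that are
# not confirmed as gunfire. All numbers below are invented for the example.

def false_alert_share(total_alerts: int, confirmed_gunshots: int) -> float:
    """Fraction of alerts with no confirmed gunfire."""
    return (total_alerts - confirmed_gunshots) / total_alerts

# If a city logged 1,000 alerts and only 300 were confirmed as gunfire,
# then 70% of the resulting police deployments chased something else.
print(f"{false_alert_share(1_000, 300):.0%}")  # -> 70%
```

Every uncorroborated alert in that 70% is still a real police deployment, which is what links the error rate to the risk of violent encounters described above.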

In light of the grave concerns raised by the use of ShotSpotter, the lawmakers are demanding that DHS investigate its funding, and whether it’s an appropriate use of taxpayer dollars. We agree: DHS should investigate, and should end its program of offering grants to local law enforcement agencies to contract with SoundThinking. 

The letter can be read in its entirety here.

EFF Submits Comments on FRT to Commission on Civil Rights

By: Hannah Zhao
April 12, 2024 at 18:06

Our faces are often exposed and, unlike passwords or PINs, cannot be remade. Governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. This is why EFF recently submitted comments to the U.S. Commission on Civil Rights, which is preparing a report on face recognition technology (FRT).

In our submission, we reiterated our stance that there should be a ban on governmental use of FRT and strict regulations on private use because it: (1) is not reliable enough to be used in determinations affecting constitutional and statutory rights or social benefits; (2) is a menace to social justice as its errors are far more pronounced when applied to people of color, members of the LGBTQ+ community, and other marginalized groups; (3) threatens privacy rights; (4) chills and deters expression; and (5) creates information security risks.

Despite these grave concerns, FRT is being used by the government and law enforcement agencies with increasing frequency, and sometimes with devastating effects. At least one Black woman and five Black men have been wrongfully arrested due to misidentification by FRT: Porcha Woodruff, Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, and Robert Williams. And Harvey Murphy Jr., a white man, was wrongfully arrested due to FRT misidentification, and then sexually assaulted while in jail.

Even if FRT were accurate, or at least equally inaccurate across demographics, it would still severely impact our privacy and security. We cannot change our faces, and we expose them to the mass surveillance networks already in place every day we go out in public. But that should not give the government or private entities license to make imprints of our faces and retain that data, especially when that data may be breached by hostile actors.

The government should ban its own use of FRT, and strictly limit private use, to protect us from the threats posed by FRT. 

The Government Shouldn’t Prosecute People With Unreliable “Black Box” Technology

By: Hannah Zhao
November 30, 2023 at 13:50

On Tuesday, EFF urged the Massachusetts Supreme Judicial Court, the highest court in that state, to affirm that a witness who has no knowledge of the proprietary algorithm used in black box technology is not qualified to testify to its reliability. We filed this amicus brief in Commonwealth v. Arrington together with the American Civil Liberties Union, the American Civil Liberties Union of Massachusetts, the National Association of Criminal Defense Lawyers, and the Massachusetts Association of Criminal Defense Lawyers. 

At issue is the iPhone’s “frequent location history” (FLH), a location estimate generated by Apple’s proprietary algorithm that has never been used in Massachusetts courts before. Generally, for information generated by a new technology to be used as evidence in a case, there must be a finding that the technology is sufficiently reliable.  

In this case, the government presented a witness who had only looked at 23 mobile devices, and there was no indication that any of them involved FLH. The witness also stated he had no idea how the FLH algorithm worked, and he had no access to Apple’s proprietary technology. The lower court correctly found that this witness was not qualified to testify on the reliability of FLH, and that the government had failed to demonstrate FLH had met the standard to be used as evidence against the defendant. 

The Massachusetts Supreme Judicial Court should affirm this ruling. Courts serve a “gatekeeper” function by determining the type of evidence that can appear before a jury at trial. Only evidence that is sufficiently reliable to be relevant should be admissible. If the government wants to present information derived from new technology, it needs to prove that the technology is reliable. When it can’t, courts shouldn’t let it use the output of black box tech to prosecute you.

The use of these tools raises many concerns, including defendants’ constitutional rights to access the evidence against them, as well as the reliability of the underlying technology in the first place. As we’ve repeatedly pointed out, many new technologies that prosecutors seek to use have been plagued with serious flaws. These flaws can especially disadvantage members of marginalized communities. Robust standards for technology used in criminal cases are necessary, as those cases can result in decades of imprisonment—or even the death penalty.

EFF continues to fight against governmental use of secret software and opaque technology in criminal cases. We hope that the Supreme Judicial Court will follow other jurisdictions in upholding requirements that favor disclosure and access to information regarding proprietary technology used in the criminal justice system.   
