Amazon and Google Must Keep Their Promises on Project Nimbus

December 2, 2024 at 14:52

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.   

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.  

Instead, they failed to respond. That silence is unfortunate, and it leads us to question how serious they were about their promises. It should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making, as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families.

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association—violations that pervasive surveillance fosters and furthers. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme.

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the companies’ commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, both companies have remained silent. Their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these questions publicly, and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.   

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.

AI in Criminal Justice Is the Trend Attorneys Need to Know About

By Beryl Lipton
November 5, 2024 at 17:00

The integration of artificial intelligence (AI) into our criminal justice system is one of the most worrying developments across policing and the courts, and EFF has been tracking it for years. EFF recently contributed a chapter on AI’s use by law enforcement to the American Bar Association’s annual publication, The State of Criminal Justice 2024.

The chapter describes some of the AI-enabled technologies being used by law enforcement, including some of the tools we feature in our Street-Level Surveillance hub, and discusses the threats AI poses to due process, privacy, and other civil liberties.

Face recognition, license plate readers, and gunshot detection systems all operate using forms of AI, enabling broad, privacy-eroding surveillance that has led to wrongful arrests and jail time through false positives. Data streams from these tools—combined with public records, geolocation tracking, and other data from mobile phones—are being shared between policing agencies and used to build increasingly detailed law enforcement profiles of people, whether or not they’re under investigation. AI software is then used to draw black-box inferences and connections across these combined data sets. A growing number of police departments have been eager to add AI to their arsenals, largely encouraged by extensive marketing by the companies developing and selling this equipment and software.

“As AI facilitates mass privacy invasion and risks routinizing—or even legitimizing—inequalities and abuses, its influence on law enforcement responsibilities has important implications for the application of the law, the protection of civil liberties and privacy rights, and the integrity of our criminal justice system,” EFF Investigative Researcher Beryl Lipton wrote.

The ABA’s 2024 State of Criminal Justice publication is available from the ABA in book or PDF format.

Civil Rights Commission Pans Face Recognition Technology

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.
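
To make that matching step concrete, here is a minimal sketch of the pipeline described above. It is an illustration only: the embedding function, the cosine-similarity scoring, and the threshold are assumptions for the example, not a description of any vendor’s actual FRT system.

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model that maps a face
    image to a fixed-length feature vector; real FRT systems use
    proprietary deep networks here."""
    raise NotImplementedError("replace with a real embedding model")

def match_probe(probe_vec, gallery, threshold=0.6):
    """Score a probe photo's embedding against every entry in the
    comparison data set (the 'gallery') by cosine similarity, and
    return candidate identities above the threshold, best match first."""
    candidates = []
    for identity, gallery_vec in gallery.items():
        score = float(np.dot(probe_vec, gallery_vec) /
                      (np.linalg.norm(probe_vec) * np.linalg.norm(gallery_vec)))
        if score >= threshold:
            candidates.append((identity, score))
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```

In a sketch like this, the threshold is where the error trade-off discussed below lives: lowering it produces more false positives, raising it produces more false negatives.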

The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we've stated about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.”

The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, where the stakes of an FRT misidentification are people’s lives and liberty.
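
For readers less familiar with the error terminology, a short worked example (with invented counts, not the Commission’s data) shows how the two rates are computed and what it means for false positives to dominate:

```python
# Invented evaluation counts for a face matcher -- illustrative only.
false_positives = 300    # different people wrongly declared a match
true_negatives  = 9_700  # different people correctly rejected
false_negatives = 10     # same person wrongly rejected
true_positives  = 990    # same person correctly matched

# False positive rate: share of different-person comparisons wrongly accepted.
fpr = false_positives / (false_positives + true_negatives)   # 0.03
# False negative rate: share of same-person comparisons wrongly rejected.
fnr = false_negatives / (false_negatives + true_positives)   # 0.01
print(f"FPR = {fpr:.2%}, FNR = {fnr:.2%}")  # FPR = 3.00%, FNR = 1.00%
```

In a criminal investigation, the false positive is the dangerous direction: it points police at a person who was never there.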

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

Digital License Plates and the Deal That Never Had a Chance

Location and surveillance technology permeates the driving experience. Setting aside external technology like license plate readers, there is some form of internet-connected service or surveillance capability built into or on many cars, from GPS tracking to oil-change notices. This is already a dangerous situation for many drivers and passengers, and a bill in California requiring GPS-tracking in digital license plates would put us further down this troubling path. 

In 2022, EFF fought along with other privacy groups, domestic violence organizations, and LGBTQ+ rights organizations to prevent the use of GPS-enabled technology in digital license plates. A.B. 984, authored by State Assemblymember Lori Wilson and sponsored by digital license plate company Reviver, originally would have allowed for GPS trackers to be placed in the digital license plates of personal vehicles. As we have said many times, location data is very sensitive information, because where we go can also reveal things we'd rather keep private even from others in our household. Ultimately, advocates struck a deal with the author to prohibit location tracking in passenger cars, and this troubling flaw was removed. Governor Newsom signed A.B. 984 into law. 

Now, not even two years later, the state's digital license plate vendor, Reviver, and Assemblymember Wilson have filed A.B. 3138, which directly undoes the deal from 2022 and explicitly calls for location tracking in digital license plates for passenger cars. 

To best protect consumers, EFF urges the legislature to not approve A.B. 3138. 

Consumers Could Face Serious Risks If A.B. 3138 Becomes Law

In fact, our concerns about trackers in digital plates are stronger than ever. Recent developments have made location data even more ripe for misuse.

  • People traveling to California from a state that criminalizes abortions may be unaware that the rideshare car they are in is tracking their trip to a Planned Parenthood via its digital license plate. This trip may generate location data that can be used against them in a state where abortion is criminalized.
  • Unsupportive parents of queer youth could use GPS-loaded plates to monitor or track whether teens are going to local support centers or events.
  • U.S. Immigration and Customs Enforcement (ICE) could use GPS surveillance technology to locate immigrants, as it has done by exploiting ALPR location data exchanged between local police departments and ICE to track immigrants’ movements. The invasiveness of vehicle location technology is part of a broad range of surveillance technology in ICE’s hands that fortifies its ever-growing “virtual wall.”
  • There are also serious implications in domestic violence situations, where GPS tracking has been investigated and found to be used as a tool of abuse and coercion by abusive partners. Most recently, two Kansas City families are jointly suing the company Spytec GPS after its technology was used in a double murder-suicide, in which a man used GPS trackers to find and kill his ex-girlfriend and her current boyfriend, and then killed himself. The families say the lawsuit is, in part, to raise awareness about the danger of making this technology and location information more easily available. There's no reason to make tracking any easier by embedding it in state-issued plates.

We Urge the Legislature to Reject A.B. 3138  

Shortly after California approved Reviver to provide digital license plates for commercial vehicles under A.B. 984, the company experienced a security breach that made it possible for hackers to track vehicles with a Reviver digital license plate via GPS in real time. Privacy issues aside, this summer the state of Michigan also terminated its two-year-old contract with Reviver over the company’s failure to follow state law and its contractual obligations, forcing 1,700 Michigan drivers to go back to traditional metal license plates.

Reviver is the only company that currently has state authorization to sell digital plates in California, and is the primary advocate for allowing tracking in passenger vehicle plates. The company says its goal is to modernize personalization and safety with digital license plate technology for passenger vehicles. But they haven't proven themselves up to the responsibility of protecting this data. 

A.B. 3138 functionally gives drivers one choice for a digital license plate vendor, and that vendor failed once to competently secure the location data collected by its products. It has now failed to meet basic contractual obligations with a state agency. California lawmakers should think carefully about the clear dangers of vehicle location tracking, and whether we can trust this company to protect the sensitive location information for vulnerable populations, or for any Californian.  

Four Actions You Can Take To Protect Digital Rights this International Women’s Day

This International Women’s Day, defend free speech, fight surveillance, and support innovation by calling on our elected politicians and private companies to uphold our most fundamental rights—both online and offline.

1. Pass the “My Body, My Data” Act

Privacy fears should never stand in the way of healthcare. That's why this common-sense federal bill, sponsored by U.S. Rep. Sara Jacobs, would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for. The protected information includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. It also gives people a strong private right of action to take on companies that violate their privacy.

2. Ban Government Use of Face Recognition

Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for the faces of people of color, especially Black women, as well as trans and nonbinary people. Because of face recognition errors, a Black woman, Porcha Woodruff, was wrongfully arrested, and another, Lamya Robinson, was wrongfully kicked out of a roller rink.

Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations, including to disparately surveil people of color. At the local, state, and federal level, people across the country are urging politicians to ban the government’s use of face surveillance because it is inherently invasive, discriminatory, and dangerous. Many U.S. cities have done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology. 

3. Tell Congress: Don’t Outlaw Encrypted Apps

Advocates of women's equality often face surveillance and repression from powerful interests. That's why they need strong end-to-end encryption. But if the so-called “STOP CSAM Act” passes, it would undermine digital security for all internet users, impacting private messaging and email app providers, social media platforms, cloud storage providers, and many other internet intermediaries and online services. Free speech for women’s rights advocates would also be at risk. STOP CSAM would also create a carveout in Section 230, the law that protects our online speech, exposing providers to civil lawsuits merely for hosting a platform where part of the illegal conduct occurred. Tell Congress: don't pass this law that would undermine security and free speech online, two critical elements for fighting for equality for all genders.

4. Tell Facebook: Stop Silencing Palestine

Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as its policies on violence and incitement and on dangerous organizations and individuals (DOI), have led to Palestinian content and accounts being removed and banned at an unprecedented scale. As Palestinians and their supporters have taken to social platforms to share images and posts about the situation in the Gaza Strip, some have noticed their content suddenly disappear or had their posts flagged for breaches of the platforms’ terms of use. In some cases, their accounts have been suspended, and in others, features such as liking and commenting have been restricted.

This has an exacerbated impact on the most at-risk groups in Gaza, such as those who are pregnant or need reproductive healthcare support: sharing information online is both a way to communicate the reality on the ground to the world and a way to get information to those who need it most.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Voices You Should Hear this International Women’s Day

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

December 18, 2023 at 10:12

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments are using technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian approach to streamlining border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Colorado Supreme Court Upholds Keyword Search Warrant

Today, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase. In a weak and ultimately confusing opinion, the court upheld the warrant, finding the police relied on it in good faith. EFF filed two amicus briefs and was heavily involved in the case.

The case is People v. Seymour, which involved a tragic home arson that killed several people. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.

Like geofence warrants, keyword warrants cast a dragnet that requires a provider to search its entire reserve of user data—in this case, queries by one billion Google users. Police generally have no identified suspects; instead, the sole basis for the warrant is the officer’s hunch that the suspect might have searched for something in some way related to the crime.

Keyword warrants rely on the fact that it is virtually impossible to navigate the modern Internet without entering search queries into a search engine like Google's. By some accounts, there are over 1.15 billion websites, and tens of billions of webpages. Google Search processes as many as 100,000 queries every second. Many users have come to rely on search engines to such a degree that they routinely search for the answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant, even friends, family members, doctors, or clergy. Over the course of months and years, there is little about a user’s life that will not be reflected in their search keywords, from the mundane to the most intimate. The result is a vast record of some of users’ most private and personal thoughts, opinions, and associations.
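
To see why this amounts to a dragnet, consider a deliberately simplified sketch of what answering a keyword warrant requires of a provider. The data model and function below are hypothetical illustrations of the concept, not a depiction of Google’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class QueryLogEntry:
    user_ip: str      # identifying information tied to the user
    timestamp: float  # when the search was run (Unix time)
    query: str        # the search text itself

def answer_keyword_warrant(log, term, start, end):
    """Return the IP address of *every* user whose query in the time
    window contains the term. Note the shape of the computation: the
    scan must touch every user's queries, not a named suspect's --
    that is what makes the warrant a dragnet rather than a targeted
    search."""
    return {
        entry.user_ip
        for entry in log
        if start <= entry.timestamp <= end
        and term.lower() in entry.query.lower()
    }
```

Nothing in the sketch narrows the search to a suspect; the only filter is the term itself, which is exactly the particularity problem the dissent identifies below.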

In the Seymour opinion, the four-justice majority recognized that people have a constitutionally protected privacy interest in their internet search queries and that these queries impact a person’s free speech rights. The U.S. Supreme Court has held that warrants like this one that target speech are highly suspect, so courts must apply constitutional search-and-seizure requirements with “scrupulous exactitude.” Despite recognizing this directive to engage in careful, in-depth analysis, the Seymour majority’s reasoning was cursory and at points mistaken. For example, although the court found that the Colorado constitution protects users’ privacy interests in their search queries, it held that the Fourth Amendment does not, due to the third-party doctrine, because federal courts have held that there is no expectation of privacy in IP addresses. However, this overlooks the queries themselves, which many courts have suggested are more akin to the location information found to be protected in Carpenter v. United States. Similarly, the Colorado court neglected to address the constitutionality of Google’s initial search of all its users’ search queries because it found that the things seized—users’ queries and IP addresses—were sufficiently narrow. Finally, the court merely assumed without deciding that the warrant lacked probable cause, a shortcut that allowed it to overlook the warrant's facial deficiency and uphold it under the “good faith exception.”

If the majority had truly engaged with the deep constitutional issues presented by this keyword warrant, it would have found, as the three justices dissenting on this point did, that keyword warrants “are tantamount to a high-tech version of the reviled ‘general warrants’ that first gave rise to the protections in the Fourth Amendment.” They lack probable cause because a mere hunch that some unknown person might have searched for a specific phrase related to the crime is insufficient to support a search of everyone’s search queries, let alone of a specific, previously unnamed individual. And keyword warrants are insufficiently particular because they do next to nothing to narrow the universe of the search.

We are disappointed in the result in this case. Keyword warrants not only have the potential to implicate innocent people; they also allow the government to target people for sensitive search terms like the drug mifepristone, the names of gender-affirming healthcare providers, or information about psychedelic drugs. Even searches that refer to crimes or acts of terror are not themselves criminal in all or even most cases (otherwise historians, reporters, and crime novelists could all be subject to criminal investigation). Dragnet warrants that target speech have no place in a democracy, and we will continue to challenge them in the courts and to support legislation to ban them entirely.
