Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

March 22, 2024 at 19:10

This post was written by Rachel Hochhauser, an EFF legal intern

We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool, ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed child, “maybe 14 or 15” years old, in his backyard. Three officers approached the boy’s house, one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they had fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company producing and distributing audio gunshot detection systems for U.S. cities and police departments; it is currently used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. That information is then forwarded to human reviewers who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring) and to decide whether to deploy officers to the scene.

ShotSpotter claims that its technology is “97% accurate,” a figure produced by its marketing department rather than its engineers. The recent Chicago shooting shows otherwise. Indeed, a 2021 study in Chicago found that, over a period of 21 months, ShotSpotter led police to act on dead-end reports more than 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and resulted in arrests for only 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter alerts arrive on the scene expecting gunfire; already on edge, they are more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such ShotSpotter recordings to be admissible evidence.

In February, Chicago’s mayor announced that the city would not renew its contract with ShotSpotter. Many other cities have canceled or are considering canceling use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Sen. Wyden Exposes Data Brokers Selling Location Data to Anti-Abortion Groups That Target Abortion Seekers

February 27, 2024 at 19:58

This post was written by Jack Beck, an EFF legal intern

In a recent letter to the FTC and SEC, Sen. Ron Wyden (OR) details new information on data broker Near, which sold the location data of people seeking reproductive healthcare to anti-abortion groups. Near enabled these groups to send targeted ads promoting anti-abortion content to people who had visited Planned Parenthood and similar clinics.

In May 2023, the Wall Street Journal reported that Near was selling location data to anti-abortion groups. Specifically, the Journal found that the Veritas Society, a non-profit established by Wisconsin Right to Life, had hired ad agency Recrue Media. That agency purchased location data from Near and used it to target anti-abortion messaging at people who had sought reproductive healthcare.

The Veritas Society detailed the operation on its website (on a page since taken down but saved by the Internet Archive), stating that it delivered over 14 million ads to people who visited reproductive healthcare clinics. The ads appeared on Facebook, Instagram, Snapchat, and other social media platforms.

When contacted by Sen. Wyden’s investigative team, Recrue staff admitted that the agency used Near’s website to literally “draw a line” around areas their client wanted them to target. They drew these lines around reproductive health care facilities across the country, using location data purchased from Near to target visitors to 600 different Planned Parenthood locations. Sen. Wyden’s team also confirmed with Near that, until the summer of 2022, no safeguards were in place to protect the data privacy of people visiting sensitive places.

Moreover, as Sen. Wyden explains in his letter, Near was selling data to the government, though it claimed on its website to be doing no such thing. As of October 18, 2023, Sen. Wyden’s investigation found Near was still selling location data harvested from Americans without their informed consent.

Near’s invasion of our privacy shows why Congress and the states must enact privacy-first legislation that limits how corporations collect and monetize our data. We also need privacy statutes that prevent the government from sidestepping the Fourth Amendment by purchasing location information, as Sen. Wyden has proposed. Even the government admits this is a problem. Furthermore, as Near’s misconduct illustrates, safeguards must be in place to protect people in sensitive locations from being tracked.

This isn’t the first time we’ve seen data brokers sell information that can reveal visits to abortion clinics. We need laws now to strengthen privacy protections for consumers. We thank Sen. Wyden for conducting this investigation. We also commend the FTC’s recent bar on a data broker selling sensitive location data. We hope this represents the start of a longstanding trend.

FTC Bars X-Mode from Selling Sensitive Location Data

January 23, 2024 at 18:51

Update, January 23, 2024: Another week, another win! The FTC announced a successful enforcement action against another location data broker, InMarket.

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals. The company’s customers include marketers and government contractors.

The FTC’s proposed order contains a strong set of rules to protect the public from this company.

General rules for all location data:

  • X-Mode cannot collect, use, maintain, or disclose a person’s location data absent their opt-in consent. This includes location data the company collected in the past.
  • The order defines “location data” as any data that may reveal the precise location of a person or their mobile device, including from GPS, cell towers, WiFi, and Bluetooth.
  • X-Mode must adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.
  • X-Mode must, on request of a person, delete their location data, and inform them of every entity that received their location data.

Heightened rules for sensitive location data:

  • X-Mode cannot sell, disclose, or use any “sensitive” location data.
  • The order defines “sensitive” locations to include medical facilities (such as family planning centers), religious institutions, union offices, schools, shelters for domestic violence survivors, and immigrant services.
  • To implement this rule, the company must develop a comprehensive list of sensitive locations.
  • However, X-Mode can use sensitive location data if it has a direct relationship with a person related to that data, the person provides opt-in consent, and the company uses the data to provide a service the person directly requested.

As the FTC Chair and Commissioners explain in a statement accompanying this order’s announcement:

The explosion of business models that monetize people’s personal information has resulted in routine trafficking and marketing of Americans’ location data. As the FTC has stated, openly selling a person’s location data to the highest bidder can expose people to harassment, stigma, discrimination, or even physical violence. And, as a federal court recently recognized, an invasion of privacy alone can constitute “substantial injury” in violation of the law, even if that privacy invasion does not lead to further or secondary harm.

X-Mode has disputed the implications of the FTC’s statements regarding the settlement, and asserted that the FTC did not find an instance of data misuse.

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.” The FTC has laid out a powerful case that X-Mode’s brokering of location data is unfair and thus unlawful.

The FTC’s enforcement action against X-Mode sends a strong signal that other location data brokers should take a hard look at their own business model or risk similar legal consequences.

The FTC has recently taken many other welcome actions to protect data privacy from corporate surveillance. In 2023, the agency limited Rite Aid’s use of face recognition, and fined Amazon’s Ring for failing to secure its customers’ data. In 2022, the agency brought an unfair business practices claim against another location data broker, Kochava, and began exploring issuance of new rules against commercial data surveillance.

Is Your State’s Child Safety Law Unconstitutional? Try Comprehensive Data Privacy Instead

Comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harm they do to children. Well-written privacy legislation has the added benefit of being constitutional—unlike the flurry of laws that restrict content behind age verification requirements that courts have recently blocked. Such misguided laws do little to protect kids while doing much to invade everyone’s privacy and speech.

Courts have issued preliminary injunctions blocking laws in Arkansas, California, and Texas because they likely violate the First Amendment rights of all internet users. EFF has warned that such laws were bad policy and would not withstand court challenges. Nonetheless, different iterations of these child safety proposals continue to be pushed at the state and federal level.

The answer is to re-focus attention on comprehensive data privacy legislation, which would address the massive collection and processing of personal data that is the root cause of many problems online. Just as important, it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

It Is Comparatively Easy to Write Data Privacy Laws That Are Constitutional

EFF has long pushed for strong comprehensive commercial data privacy legislation and continues to do so. Data privacy legislation has many components. But at its core, it should minimize the amount of personal data that companies process, give users certain rights to control their personal data, and allow consumers to sue when the law is violated.

EFF has argued that privacy laws pass First Amendment muster when they have a few features that ensure the law reasonably fits its purpose. First, they regulate the commercial processing of personal data. Second, they do not impermissibly restrict the truthful publication of matters of public concern. And finally, the government’s interest and the law’s purpose are to protect data privacy, to expand the free expression that privacy enables, and to protect the security of data against insider threats, hacks, and eventual government surveillance. When those conditions are met, a privacy law will be constitutional if the government shows a close fit between the law’s goals and its means.

EFF made this argument in support of the Illinois Biometric Information Privacy Act (BIPA), and a law in Maine that limits the use and disclosure of personal data collected by internet service providers. BIPA, in particular, has proved wildly important to biometric privacy. For example, it led to a settlement that prohibits the company Clearview AI from selling its biometric surveillance services to law enforcement in the state. Another settlement required Facebook to pay hundreds of millions of dollars for its policy (since repealed) of extracting faceprints from users without their consent.

Courts have agreed. Privacy laws that have been upheld under the First Amendment, or cited favorably by courts, include those that regulate biometric data, health data, credit reports, broadband usage data, phone call records, and purely private conversations.

The Supreme Court, for example, has cited the federal 1996 Health Insurance Portability and Accountability Act (HIPAA) as an example of a “coherent” privacy law, even when it struck down a state law that targeted particular speakers and viewpoints. Additionally, when evaluating the federal Wiretap Act, the Supreme Court correctly held that the law cannot be used to prevent a person from publishing legally obtained communications on matters of public concern. But it otherwise left in place the wiretap restrictions that date back to 1934, designed to protect the confidentiality of private conversations.

It Is Nearly Impossible to Write Age Verification Requirements That Are Constitutional. Just Ask Arkansas, California, and Texas

Federal courts have recently granted preliminary injunctions blocking laws in Arkansas, California, and Texas from going into effect because they likely violate the First Amendment rights of all internet users. While the laws differ from each other, they all require (or strongly incentivize) age verification for all internet users.

The Arkansas law requires age verification for users of certain social media platforms, a requirement EFF strongly opposes, and bans minors from those services without parental consent. The court blocked the law, reasoning that the age verification requirement would deter everyone from accessing constitutionally protected speech and would burden anonymous speech. EFF and the ACLU filed an amicus brief against this Arkansas law.

In California, a federal court recently blocked the state’s Age-Appropriate Design Code (AADC) under the First Amendment. Significantly, the AADC strongly incentivized websites to require users to verify their age. The court correctly found that age estimation is likely to “exacerbate” the problem of child security because it requires everyone “to divulge additional personal information” to verify their age. The court blocked the entire law, even some privacy provisions we’d like to see in a comprehensive privacy law if they were not intertwined with content limitations and age-gating. EFF does not agree with the court’s reasoning in its entirety because it undervalued the state’s legitimate interest in and means of protecting people’s privacy online. Nonetheless, EFF originally asked the California governor to veto this law, believing that true data privacy legislation has nothing to do with access restrictions.

The Texas law requires age verification for users of websites that post sexual material, and exclusion of minors. The law also requires warnings about sexual content that the court found unsupported by evidence. The court held both provisions are likely unconstitutional. It explained that the age verification requirement, in particular, is “constitutionally problematic because it deters adults’ access to legal sexually explicit material, far beyond the interest of protecting minors.” EFF, ACLU, and other groups filed an amicus brief against this Texas law.

Support Comprehensive Privacy Legislation That Will Stand the Test of Time

Courts will rightly continue to strike down similar age verification and content blocking laws, just as they did 20 years ago. Lawmakers can and should avoid this pre-determined fight and focus on passing laws that will have a lasting impact: strong, well-written comprehensive data privacy.
