
The Human Toll of ALPR Errors

November 1, 2024 at 11:17 PM

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family when, all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
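To make that failure mode concrete, here is a minimal, hypothetical sketch in Python of a hot-list lookup. It is not any ALPR vendor's actual code; the hot-list entries, field names, and functions are invented for illustration. It shows how matching on the plate number alone flags a Colorado SUV because a Montana motorcycle with the same number is on the list, while a check that also compares the issuing state would not.

```python
# Hypothetical illustration of the hot-list matching described above.
# Nothing here reflects any real ALPR vendor's implementation; the hot-list
# entries, field names, and functions are invented for this sketch.

from dataclasses import dataclass


@dataclass(frozen=True)
class PlateRead:
    number: str  # characters reported by the camera's OCR
    state: str   # issuing state of the observed plate


# A naive system might key its "hot list" by plate number alone.
HOT_LIST_BY_NUMBER = {
    "ABC1234": {"state": "MT", "vehicle": "motorcycle"},  # stolen motorcycle, Montana plates
}


def naive_match(read: PlateRead) -> bool:
    """Flag any plate whose number appears on the hot list, ignoring the state."""
    return read.number in HOT_LIST_BY_NUMBER


def stricter_match(read: PlateRead) -> bool:
    """Require the issuing state to match as well before raising an alert."""
    entry = HOT_LIST_BY_NUMBER.get(read.number)
    return entry is not None and entry["state"] == read.state


suv_in_colorado = PlateRead(number="ABC1234", state="CO")
print(naive_match(suv_in_colorado))     # True  -> false alarm on an innocent driver
print(stricter_match(suv_in_colorado))  # False -> no alert
```

Even the stricter check is no substitute for officers independently verifying a hit before a high-risk stop, because OCR can also misread individual characters, as the cases below show.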

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her at gunpoint to exit her vehicle and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, in part because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ as an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on the license plate. One study found that ALPRs misread the state on 1 in 10 plates (not counting other reading errors).

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the threats that ALPR technology poses to safety, privacy, and other rights through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

Americans Are Uncomfortable with Automated Decision-Making

Imagine that a company you recently applied to work at used an artificial intelligence program to analyze your application and help expedite the review process. Does that creep you out? Well, you’re not alone.

Consumer Reports recently released a national survey finding that Americans are uncomfortable with the use of artificial intelligence (AI) and algorithmic decision-making in their day-to-day lives. The survey of 2,022 U.S. adults was administered by NORC at the University of Chicago and examined public attitudes on a variety of issues. Consumer Reports found:

  • Nearly three-quarters of respondents (72%) said they would be “uncomfortable” with a job interview process that allowed AI to screen their interview by grading their responses and, in some cases, their facial movements; nearly half (45%) said they would be “very uncomfortable.”
  • About two-thirds said they would be “uncomfortable” with banks using such programs to determine whether they qualified for a loan, or with landlords using such programs to screen them as potential tenants; about four in ten (39%) said they would be “very uncomfortable.”
  • More than half said they would be “uncomfortable” with video surveillance systems using facial recognition to identify them, and with hospital systems using AI or algorithms to help with diagnosis and treatment planning; about a third said they would be “very uncomfortable.”

The survey findings indicate that people feel disempowered by the loss of control over their digital footprint, and by corporations and government agencies adopting AI technology to make life-altering decisions about them. Yet states are moving at breakneck speed to implement AI “solutions” without first creating meaningful guidelines to address these reasonable concerns. In California, Governor Newsom issued an executive order to address government use of AI, and recently granted five vendors approval to test AI for a myriad of state agencies. The administration hopes to apply AI to tasks such as health-care facility inspections, assistance for residents who are not fluent in English, and customer service.

The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another super-majority (91%) said they would want a way to correct the data when a computer algorithm is used to make a decision about them.

As states explore how to best protect consumers as corporations and government agencies deploy algorithmic decision-making, EFF urges strict standards of transparency and accountability. Laws should have a “privacy first” approach that ensures people have a say in how their private data is used. At a minimum, people should have a right to access what data is being used to make decisions about them and have the opportunity to correct it. Likewise, agencies and businesses using automated decision-making should offer an appeal process. Governments should ensure that consumers have protections from discrimination in algorithmic decision-making by both corporations and the public sector. Another priority should be a complete ban on many government uses of automated decision-making, including predictive policing.

Whether it is deciding who gets housing or the best mortgages, who gets an interview or a job, or whom law enforcement or ICE investigates, people are uncomfortable with algorithmic decision-making that affects their freedoms. Now is the time for strong legal protections.

Court to California: Try a Privacy Law, Not Online Censorship

In a victory for free speech and privacy, a federal appellate court confirmed last week that parts of the California Age-Appropriate Design Code Act likely violate the First Amendment, and that other parts require further review by the lower court.

The U.S. Court of Appeals for the Ninth Circuit correctly rejected rules requiring online businesses to opine on whether the content they host is “harmful” to children, and then to mitigate such harms. EFF and CDT filed a friend-of-the-court brief in the case earlier this year arguing for this point.

The court also provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the Act’s age-verification provision. We will continue to argue, in this case and others, that this provision violates the First Amendment rights of children and adults.

The Act, The Rulings, and Our Amicus Brief

In 2022, California enacted its Age-Appropriate Design Code Act (AADC). Three of the law’s provisions are crucial for understanding the court’s ruling.

  1. The Act requires an online business to write a “Data Protection Impact Assessment” for each of its features that children are likely to access. It must also address whether the feature’s design could, among other things, “expos[e] children to harmful, or potentially harmful, content.” Then the business must create a “plan to mitigate” that risk.
  2. The Act requires online businesses to follow enumerated data privacy rules specific to children. These include data minimization, and limits on processing precise geolocation data.
  3. The Act requires online businesses to “estimate the age of child users,” to an extent proportionate to the risks arising from the business’s data practices, or to apply child data privacy rules to all consumers.

In 2023, a federal district court blocked the law, ruling that it likely violates the First Amendment. The state appealed.

EFF’s brief in support of the district court’s ruling argued that the Act’s age-verification provision and vague “harmful” standard are unconstitutional; that these provisions cannot be severed from the rest of the Act; and thus that the entire Act should be struck down. We conditionally argued that if the court rejected our initial severability argument, the privacy principles in the Act could survive the reduced judicial scrutiny applied to such laws and still safeguard people’s personal information. This is especially true given the government’s many substantial interests in protecting data privacy.

The Ninth Circuit affirmed the preliminary injunction as to the Act’s Impact Assessment provisions, explaining that they likely violate the First Amendment on their face. The appeals court vacated the preliminary injunction as to the Act’s other provisions, reasoning that the lower court had not applied the correct legal tests. The appeals court sent the case back to the lower court to do so.

Good News: No Online Censorship

The Ninth Circuit’s decision to prevent enforcement of the AADC’s impact assessments on First Amendment grounds is a victory for internet users of all ages because it ensures everyone can continue to access and disseminate lawful speech online.

The AADC’s central provisions would have required a diverse array of online services—from social media to news sites—to review the content on their sites and consider whether children might view or receive harmful information. EFF argued that this provision imposed content-based restrictions on what speech services could host online and was so vague that it could reach lawful speech that is upsetting, including news about current events.

The Ninth Circuit agreed with EFF that the AADC’s “harmful to minors” standard was vague and likely violated the First Amendment for several reasons, including because it “deputizes covered businesses into serving as censors for the State.”

The court ruled that these AADC censorship provisions were subject to the highest form of First Amendment scrutiny because they restricted content online, a point EFF argued. The court rejected California’s argument that the provisions should be subjected to reduced scrutiny under the First Amendment because they sought to regulate commercial transactions.

“There should be no doubt that the speech children might encounter online while using covered businesses’ services is not mere commercial speech,” the court wrote.

Finally, the court ruled that the AADC’s censorship provisions likely failed under the First Amendment because they are not narrowly tailored and California has less speech-restrictive ways to protect children online.

EFF is pleased that the court saw AADC’s impact assessment requirements for the speech restrictions that they are. With those provisions preliminarily enjoined, everyone can continue to access important, lawful speech online.

More Good News: A Roadmap for Privacy-First Laws

The appeals court did not rule on whether the Act’s data privacy provisions could survive First Amendment review. Instead, it directed the lower court in the first instance to apply the correct tests.

In doing so, the appeals court provided guideposts for how legislatures can write data privacy laws that survive First Amendment review. Spoiler alert: enact a “privacy first” law, without unlawful censorship provisions.

Dark patterns. Some privacy laws prohibit user interfaces that have the intent or substantial effect of impairing autonomy and choice. The appeals court reversed the preliminary injunction against the Act’s dark patterns provision, because it is unclear whether dark patterns are even protected speech, and if so, what level of scrutiny they would face.

Clarity. Some privacy laws require businesses to use clear language in their published privacy policies. The appeals court reversed the preliminary injunction against the Act’s clarity provision, because there wasn’t enough evidence to say whether the provision would run afoul of the First Amendment. Indeed, “many” applications will involve “purely factual and non-controversial” speech that could survive review.

Transparency. Some privacy laws require businesses to disclose information about their data processing practices. In rejecting the Act’s Impact Assessments, the appeals court rejected an analogy to the California Consumer Privacy Act’s unproblematic requirement that large data processors annually report metrics about consumer requests to access, correct, and delete their data. Likewise, the court reserved judgment on the constitutionality of two of the Act’s own “more limited” reporting requirements, which did not require businesses to opine on whether third-party content is “harmful” to children.

Social media. Many privacy laws apply to social media companies. While EFF is second to none in defending the First Amendment right to moderate content, we nonetheless welcome the appeals court’s rejection of the lower court’s “speculat[ion]” that the Act’s privacy provisions “would ultimately curtail the editorial decisions of social media companies.” Some right-to-curate allegations against privacy laws might best be resolved with “as-applied claims” in specific contexts, instead of on their face.

Ninth Circuit Punts on the AADC’s Age-Verification Provision

The appellate court left open an important issue for the trial court to take up: whether the AADC’s age-verification provision violates the First Amendment rights of adults and children by blocking them from lawful speech, frustrating their ability to remain anonymous online, and chilling their speech to avoid danger of losing their online privacy.

EFF also argued in our Ninth Circuit brief that the AADC’s age-verification provision was similar to many other laws that courts have repeatedly found to violate the First Amendment.

The Ninth Circuit missed a great opportunity to confirm that the AADC’s age-verification provision violated the First Amendment. The court didn’t pass judgment on the provision, but rather ruled that the district court had failed to adequately assess the provision to determine whether it violated the First Amendment on its face.

As EFF’s brief argued, the AADC’s age-estimation provision is pernicious because it restricts everyone’s access to lawful speech online, by requiring adults to show proof that they are old enough to access lawful content the AADC deems harmful.

We look forward to the district court recognizing the constitutional flaws of the AADC’s age-verification provision once the issue is back before it.

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

March 22, 2024 at 7:10 PM

This post was written by Rachel Hochhauser, an EFF legal intern.

We’ve written multiple times about ShotSpotter, the inaccurate and dangerous “gunshot detection” tool. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed child, “maybe 14 or 15” years old, in his backyard. Three officers approached the boy’s house, with one asking, “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company producing and distributing audio gunshot detection systems for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to human reviewers who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring), and to decide whether to deploy officers to the scene.
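As a rough illustration of that pipeline, and only that, the sketch below models the steps in Python. It is not ShotSpotter’s actual software; the threshold, labels, and function names are assumptions invented for the example.

```python
# Illustrative sketch of the alert pipeline described above: a sensor-side
# classifier flags a loud impulsive sound, a human reviewer labels it, and
# only a confirmed label produces a dispatch. All values are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AcousticEvent:
    sensor_id: str
    classifier_score: float  # hypothetical confidence that the sound was a gunshot


def sensor_flags_event(event: AcousticEvent, threshold: float = 0.8) -> bool:
    """Step 1: the sensor forwards sounds it scores above a threshold.

    Fireworks and car backfires can also score high, which is exactly the
    ambiguity the human review step is supposed to resolve.
    """
    return event.classifier_score >= threshold


def review_and_dispatch(event: AcousticEvent, reviewer_label: str) -> Optional[str]:
    """Steps 2-3: a reviewer confirms or rejects; only confirmations dispatch officers."""
    if reviewer_label == "gunfire":
        return f"dispatch officers to sensor {event.sensor_id}"
    return None  # rejected as a non-gunfire sound


event = AcousticEvent(sensor_id="lamp-post-17", classifier_score=0.93)
if sensor_flags_event(event):
    print(review_and_dispatch(event, reviewer_label="fireworks"))  # None -> no dispatch
```

The incidents described here show how much rides on every stage of that chain being right, and on officers treating a forwarded alert as a tip to verify rather than as confirmed gunfire.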

ShotSpotter claims that its technology is “97% accurate,” a figure produced by its marketing department, not its engineers. The recent Chicago shooting shows that this claim does not hold up. Indeed, a 2021 study in Chicago found that, over a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and resulted in an arrest for only 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter alerts arrive at the scene expecting gunfire, on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such ShotSpotter recordings to be admissible evidence.

In February, Chicago’s mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled, or are considering cancelling, their use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Sen. Wyden Exposes Data Brokers Selling Location Data to Anti-Abortion Groups That Target Abortion Seekers

February 27, 2024 at 7:58 PM

This post was written by Jack Beck, an EFF legal intern.

In a recent letter to the FTC and SEC, Sen. Ron Wyden (OR) details new information on data broker Near, which sold the location data of people seeking reproductive healthcare to anti-abortion groups. Near enabled these groups to send targeted ads promoting anti-abortion content to people who had visited Planned Parenthood and similar clinics.

In May 2023, the Wall Street Journal reported that Near was selling location data to anti-abortion groups. Specifically, the Journal found that the Veritas Society, a non-profit established by Wisconsin Right to Life, had hired ad agency Recrue Media. That agency purchased location data from Near and used it to target anti-abortion messaging at people who had sought reproductive healthcare.

The Veritas Society detailed the operation on its website (on a page that was taken down but saved by the Internet Archive) and stated that it delivered over 14 million ads to people who visited reproductive healthcare clinics. These ads appeared on Facebook, Instagram, Snapchat, and other social media for people who had sought reproductive healthcare.

When contacted by Sen. Wyden’s investigative team, Recrue staff admitted that the agency used Near’s website to literally “draw a line” around areas their client wanted them to target. They drew these lines around reproductive health care facilities across the country, using location data purchased from Near to target visitors to 600 different Planned Parenthood locations. Sen. Wyden’s team also confirmed with Near that, until the summer of 2022, no safeguards were in place to protect the data privacy of people visiting sensitive places.

Moreover, as Sen. Wyden explains in his letter, Near was selling data to the government, though it claimed on its website to be doing no such thing. As of October 18, 2023, Sen. Wyden’s investigation found Near was still selling location data harvested from Americans without their informed consent.

Near’s invasion of our privacy shows why Congress and the states must enact privacy-first legislation that limits how corporations collect and monetize our data. We also need privacy statutes that prevent the government from sidestepping the Fourth Amendment by purchasing location information—as Sen. Wyden has proposed. Even the government admits this is a problem.  Furthermore, as Near’s misconduct illustrates, safeguards must be in place that protect people in sensitive locations from being tracked.

This isn’t the first time we’ve seen data brokers sell information that can reveal visits to abortion clinics. We need laws now to strengthen privacy protections for consumers. We thank Sen. Wyden for conducting this investigation. We also commend the FTC’s recent bar on a data broker selling sensitive location data. We hope this represents the start of a longstanding trend.

FTC Bars X-Mode from Selling Sensitive Location Data

January 23, 2024 at 6:51 PM

Update, January 23, 2024: Another week, another win! The FTC announced a successful enforcement action against another location data broker, InMarket.

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals. The company’s customers include marketers and government contractors.

The FTC’s proposed order contains a strong set of rules to protect the public from this company.

General rules for all location data:

  • X-Mode cannot collect, use, maintain, or disclose a person’s location data absent their opt-in consent. This includes location data the company collected in the past.
  • The order defines “location data” as any data that may reveal the precise location of a person or their mobile device, including from GPS, cell towers, WiFi, and Bluetooth.
  • X-Mode must adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.
  • X-Mode must, on request of a person, delete their location data, and inform them of every entity that received their location data.

Heightened rules for sensitive location data:

  • X-Mode cannot sell, disclose, or use any “sensitive” location data.
  • The order defines “sensitive” locations to include medical facilities (such as family planning centers), religious institutions, union offices, schools, shelters for domestic violence survivors, and immigrant services.
  • To implement this rule, the company must develop a comprehensive list of sensitive locations.
  • However, X-Mode can use sensitive location data if it has a direct relationship with a person related to that data, the person provides opt-in consent, and the company uses the data to provide a service the person directly requested.

As the FTC Chair and Commissioners explain in a statement accompanying this order’s announcement:

The explosion of business models that monetize people’s personal information has resulted in routine trafficking and marketing of Americans’ location data. As the FTC has stated, openly selling a person’s location data to the highest bidder can expose people to harassment, stigma, discrimination, or even physical violence. And, as a federal court recently recognized, an invasion of privacy alone can constitute “substantial injury” in violation of the law, even if that privacy invasion does not lead to further or secondary harm.

X-Mode has disputed the implications of the FTC’s statements regarding the settlement, and asserted that the FTC did not find an instance of data misuse.

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.” The FTC has laid out a powerful case that X-Mode’s brokering of location data is unfair and thus unlawful.

The FTC’s enforcement action against X-Mode sends a strong signal that other location data brokers should take a hard look at their own business model or risk similar legal consequences.

The FTC has recently taken many other welcome actions to protect data privacy from corporate surveillance. In 2023, the agency limited Rite Aid’s use of face recognition, and fined Amazon’s Ring for failing to secure its customers’ data. In 2022, the agency brought an unfair business practices claim against another location data broker, Kochava, and began exploring issuance of new rules against commercial data surveillance.

Is Your State’s Child Safety Law Unconstitutional? Try Comprehensive Data Privacy Instead

Comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harm they do to children. Well-written privacy legislation has the added benefit of being constitutional—unlike the flurry of laws that restrict content behind age verification requirements that courts have recently blocked. Such misguided laws do little to protect kids while doing much to invade everyone’s privacy and speech.

Courts have issued preliminary injunctions blocking laws in Arkansas, California, and Texas because they likely violate the First Amendment rights of all internet users. EFF has warned that such laws were bad policy and would not withstand court challenges. Nonetheless, different iterations of these child safety proposals continue to be pushed at the state and federal level.

The answer is to re-focus attention on comprehensive data privacy legislation, which would address the massive collection and processing of personal data that is the root cause of many problems online. Just as important, it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

It Is Comparatively Easy to Write Data Privacy Laws That Are Constitutional

EFF has long pushed for strong comprehensive commercial data privacy legislation and continues to do so. Data privacy legislation has many components. But at its core, it should minimize the amount of personal data that companies process, give users certain rights to control their personal data, and allow consumers to sue when the law is violated.

EFF has argued that privacy laws pass First Amendment muster when they have a few features that ensure the law reasonably fits its purpose. First, they regulate the commercial processing of personal data. Second, they do not impermissibly restrict the truthful publication of matters of public concern. And finally, the government’s interest and the law’s purpose are to protect data privacy, to expand the free expression that privacy enables, and to protect the security of data against insider threats, hacks, and eventual government surveillance. When these conditions are met, a privacy law will be constitutional if the government shows a close fit between the law’s goals and its means.

EFF made this argument in support of the Illinois Biometric Information Privacy Act (BIPA), and a law in Maine that limits the use and disclosure of personal data collected by internet service providers. BIPA, in particular, has proved wildly important to biometric privacy. For example, it led to a settlement that prohibits the company Clearview AI from selling its biometric surveillance services to law enforcement in the state. Another settlement required Facebook to pay hundreds of millions of dollars for its policy (since repealed) of extracting faceprints from users without their consent.

Courts have agreed. Privacy laws that have been upheld under the First Amendment, or cited favorably by courts, include those that regulate biometric data, health data, credit reports, broadband usage data, phone call records, and purely private conversations.

The Supreme Court, for example, has cited the federal 1996 Health Insurance Portability and Accountability Act (HIPAA) as an example of a “coherent” privacy law, even when it struck down a state law that targeted particular speakers and viewpoints. Additionally, when evaluating the federal Wiretap Act, the Supreme Court correctly held that the law cannot be used to prevent a person from publishing legally obtained communications on matters of public concern. But it otherwise left in place the wiretap restrictions that date back to 1934, designed to protect the confidentiality of private conversations.

It Is Nearly Impossible to Write Age Verification Requirements That Are Constitutional. Just Ask Arkansas, California, and Texas

Federal courts have recently granted preliminary injunctions that block laws in Arkansas, California, and Texas from going into effect because they likely violate the First Amendment rights of all internet users. While the laws differ from each other, they all require (or strongly incentivize) age verification for all users.

The Arkansas law, which EFF strongly opposes, requires age verification for users of certain social media services and bans minors from those services without parental consent. The court blocked it, reasoning that the age verification requirement would deter everyone from accessing constitutionally protected speech and would burden anonymous speech. EFF and ACLU filed an amicus brief against this Arkansas law.

In California, a federal court recently blocked the state’s Age-Appropriate Design Code (AADC) under the First Amendment. Significantly, the AADC strongly incentivized websites to require users to verify their age. The court correctly found that age estimation is likely to “exacerbate” the problem of child security because it requires everyone “to divulge additional personal information” to verify their age. The court blocked the entire law, even some privacy provisions we’d like to see in a comprehensive privacy law if they were not intertwined with content limitations and age-gating. EFF does not agree with the court’s reasoning in its entirety because it undervalued the state’s legitimate interest in and means of protecting people’s privacy online. Nonetheless, EFF originally asked the California governor to veto this law, believing that true data privacy legislation has nothing to do with access restrictions.

The Texas law requires age verification for users of websites that post sexual material, and exclusion of minors. The law also requires warnings about sexual content that the court found unsupported by evidence. The court held both provisions are likely unconstitutional. It explained that the age verification requirement, in particular, is “constitutionally problematic because it deters adults’ access to legal sexually explicit material, far beyond the interest of protecting minors.” EFF, ACLU, and other groups filed an amicus brief against this Texas law.

Support Comprehensive Privacy Legislation That Will Stand the Test of Time

Courts will rightly continue to strike down similar age verification and content blocking laws, just as they did 20 years ago. Lawmakers can and should avoid this pre-determined fight and focus on passing laws that will have a lasting impact: strong, well-written comprehensive data privacy.
