Today, April 4, 2025

State AGs Must Act: EFF Expands Call to Investigate Crisis Pregnancy Centers

Back in January, EFF called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable crisis pregnancy centers (CPCs) that engage in deceptive practices. Since then, some of these centers have begun to change their websites, quietly removing misleading language and privacy claims; the Hawaii legislature is considering a bill calling on the attorney general to investigate CPCs in the state, and legislators in Georgia have introduced a slate of bills to tackle deceptive CPC practices.

But there is much more to do. Today, we’re expanding our call to attorneys general in Tennessee, Oklahoma, Nebraska, and North Carolina, urging them to investigate the centers in their states.

Many CPCs have operated under a veil of misleading promises for years, suggesting that clients’ personal health data is protected under HIPAA. Numerous reports suggest otherwise: privacy policies are not followed consistently, and clients' personal data may be shared across networks without appropriate consent. For example, in a case in Louisiana, we saw firsthand how a CPC inadvertently exposed personal data from multiple clients in a software training video. This kind of error not only violates individuals’ privacy but could also lead to emotional and psychological harm for individuals who trusted these centers with their sensitive information.

In our letters to Attorneys General Hilgers, Jackson, Drummond, and Skrmetti, we list multiple examples of CPCs in each of these states that claim to comply with HIPAA. Those include:

  • Gateway Women’s Care in North Carolina claims that “we hold your right to confidentiality with the utmost care and respect and comply with HIPAA privacy standards, which protect your personal and health information” in a blog post titled “Is My Visit Confidential?” Gateway Women’s Care received $56,514 in government grants in 2023. 
  • Assure Women’s Center in Nebraska stresses that it is “HIPAA compliant!” in a blog post that expressly urges people to visit them “before your doctor.”

As we’ve noted before, there are far too few protections for user privacy, including medical privacy, and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices.

It’s time for state and federal leaders to reassess how public funds are allocated to these centers. Our elected officials are responsible for ensuring that personal information, especially our sensitive medical data, is protected. After all, no one should have to choose between their healthcare and their privacy.

Hawaii Takes a Stand for Privacy: HCR 144/HR 138 Calls for Investigation of Crisis Pregnancy Centers

In a bold push for medical privacy, Hawaii's House of Representatives has introduced HCR 144/HR 138, a resolution calling for the Hawaii Attorney General to investigate whether crisis pregnancy centers (CPCs) are violating patient privacy laws. 

Often referred to as “fake clinics” or “unregulated pregnancy centers” (UPCs), these are non-medical centers that provide free pregnancy tests and counseling but typically do not offer essential reproductive care like abortion or contraception. In Hawaii, these centers outnumber actual clinics offering abortion and reproductive healthcare. In fact, the first CPC in the United States was opened in Hawaii in 1967 by Robert Pearson, who then founded the Pearson Foundation, a St. Louis-based organization that assists local groups in setting up unregulated crisis pregnancy centers.

EFF has called on state AGs to investigate CPCs across the country. In particular, we are concerned that many centers have misrepresented their privacy practices, including suggesting that patient information is protected by HIPAA when it may not be. In January, EFF contacted attorneys general in Florida, Texas, Arkansas, and Missouri asking them to identify and hold accountable CPCs that engage in deceptive practices.

Rep. Kapela’s resolution specifically references EFF’s call on state Attorneys General. It reads:

“WHEREAS, the Electronic Frontiers Foundation, an international digital rights nonprofit that promotes internet civil liberties, has called on states to investigate whether crisis pregnancy centers are complying with patient privacy regulations with regard to the retention and use of collected patient data.” 

HCR 144/HR 138 underscores the need to ensure that healthcare providers handle personal data, particularly medical data, securely and transparently. Along with EFF’s letters to state AGs, the resolution refers to the growing body of research on the topic, such as:

  • A 2024 Healthcare Management Associates study showed that CPCs received $400 million in federal funding between 2017 and 2023, with little oversight from regulators.
  • A Health Affairs article from November 2024 titled "Addressing the HIPAA Blind Spot for Crisis Pregnancy Centers" noted that crisis pregnancy centers often invoke the Health Insurance Portability and Accountability Act (HIPAA) to collect personal information from clients.

Regardless of one's stance on reproductive healthcare, there is one principle that should be universally accepted: the right to privacy. As HCR 144/HR 138 moves forward, it is imperative that Hawaii's Attorney General investigate whether CPCs are complying with privacy regulations and take action, if necessary, to protect the privacy rights of individuals seeking reproductive healthcare in Hawaii. 

Without comprehensive privacy laws that offer individuals a private right of action, state authorities must be the front line in safeguarding the privacy of their constituents. As we continue to advocate for stronger privacy protections nationwide, we encourage lawmakers and advocates in other states to follow Hawaii's lead and take action to protect the medical privacy rights of all of their constituents.

First Porn, Now Skin Cream? ‘Age Verification’ Bills Are Out of Control

I’m old enough to remember when age verification bills were pitched as a way to ‘save the kids from porn’ and shield them from other vague dangers lurking in the digital world (like…“the transgender”). We have long cautioned about the dangers of these laws, and pointed out why they are likely to fail. While they may be well-intentioned, the growing proliferation of age verification schemes poses serious risks to all of our digital freedoms.

Fast forward a few years, and these laws have morphed into something else entirely—unfortunately, something we expected. What started as a misguided attempt to protect minors from "explicit" content online has spiraled into a tangled mess of privacy-invasive surveillance schemes affecting skincare products, dating apps, and even diet pills, threatening everyone’s right to privacy.

Age Verification Laws: A Backdoor to Surveillance

Age verification laws do far more than ‘protect children online’—they require the creation of a system that collects vast amounts of personal information from everyone. Instead of making the internet safer for children, these laws force all users—regardless of age—to verify their identity just to access basic content or products. This isn't a mistake; it's a deliberate strategy. As one sponsor of age verification bills in Alabama admitted, "I knew the tough nut to crack that social media would be, so I said, ‘Take first one bite at it through pornography, and the next session, once that got passed, then go and work on the social media issue.’” In other words, they recognized that targeting porn would be an easier way to introduce these age verification systems, knowing it would be more emotionally charged and easier to pass. This is just the beginning of a broader surveillance system disguised as a safety measure.

This alarming trend is already clear, with the growing creep of age verification bills filed in the first month of the 2025-2026 state legislative session. Consider these three bills: 

  1. Skincare: AB-728 in California
    Age verification just hit the skincare aisle! California’s AB-728 mandates age verification for anyone purchasing skin care products or cosmetics that contain certain chemicals like Vitamin A or alpha hydroxy acids. On the surface, this may seem harmless—who doesn't want to ensure that minors are safe from harmful chemicals? But the real issue lies in the invasive surveillance it mandates. A person simply trying to buy face cream could be forced to submit sensitive personal data through “an age verification system,” subjecting everyone to constant tracking and data collection for a product that should be innocuous.
  2. Dating Apps: A3323 in New York
    Match made in heaven? Not without your government-issued ID. New York’s A3323 bill mandates that online dating services verify users’ age, identity, and location before allowing access to their platforms. The bill's sweeping requirements introduce serious privacy concerns for all users. By forcing users to provide sensitive personal information—such as government-issued IDs and location data—the bill creates significant risks that this data could be misused, sold, or exposed through data breaches. 
  3. Dieting products: SB 5622 in Washington State
    Shed your privacy before you shed those pounds! Washington State’s SB 5622 takes aim at diet pills and dietary supplements by restricting their sale to anyone under 18. While the bill’s intention is to protect young people from potentially harmful dieting products, it misses the mark by overlooking the massive privacy risks associated with the age verification process for everyone else. To enforce this restriction, the bill requires intrusive personal data collection for purchasing diet pills in person or online, opening the door for sensitive information to be exploited.

The Problem with Age Verification: No Solution Is Safe

Let’s be clear: no method of age verification is both privacy-protective and entirely accurate. The methods also don’t fall on a neat spectrum of “more safe” to “less safe.” Instead, every form of age verification is better described as “dangerous in one way” or “dangerous in a different way.” These systems are inherently flawed, and none come without trade-offs. Additionally, they continue to burden adults who just want to browse the internet or buy everyday items without being subjected to mass data collection.

For example, when an age verification system requires users to submit government-issued identification or a scan of their face, it collects a staggering amount of sensitive, often immutable, biometric or other personal data—jeopardizing internet users’ privacy and security. Systems that rely on credit card information, phone numbers, or other third-party material similarly amass troves of personal data. This data is just as susceptible to being misused as any other data, creating vulnerabilities for identity theft and data breaches. These issues are not just theoretical: age verification companies can be—and already have been—hacked. These are real, ongoing concerns for anyone who values their privacy.

We must push back against age verification bills that create surveillance systems and undermine our civil liberties, and we must be clear-eyed about the dangers posed by these expanding age verification laws. While the intent to protect children makes sense, the unintended consequence is a massive erosion of privacy, security, and free expression online for everyone. Rather than focusing on restrictive age verification systems, lawmakers should explore better, less invasive ways to protect everyone online—methods that don’t place the entire burden of risk on individuals or threaten their fundamental rights. 

EFF will continue to advocate for digital privacy, security, and free expression. We urge legislators to prioritize solutions that uphold these essential values, ensuring that the internet remains a space for learning, connecting, and creating—without the constant threat of surveillance or censorship. Whether you’re buying a face cream, swiping on a dating app, or browsing for a bottle of diet pills, age verification laws undermine that vision, and we must do better.

New Yorkers Deserve Stronger Health Data Protections Now—Governor Hochul Can Make It Happen

February 25, 2025 at 11:34 a.m.

With the rise of digital surveillance, securing our health data is no longer just a privacy issue—it's a matter of personal safety. In the wake of the Supreme Court's reversal of Roe v. Wade and the growing restrictions on abortion and gender-affirming care, protecting our personal health data has never been more important. And in a world where nearly half of U.S. states have either banned or are on the brink of banning abortion, unfettered access to personal health data is an even more dangerous threat.

That’s why EFF joins the New York Civil Liberties Union (NYCLU) in urging Governor Hochul to sign the New York Health Information Privacy Act (A.2141/S.929). This legislation is a crucial step toward safeguarding the digital privacy of New Yorkers at a time when health data is increasingly vulnerable to misuse.

Why Health Data Privacy Matters

When individuals seek reproductive health care or gender-affirming care, they leave behind a digital trail. Whether through search histories, email exchanges, travel itineraries, or data from period-tracker apps and smartwatches, every click, every action, and every step is tracked, often with little or no consent. And this kind of data—however collected—has already been used to criminalize individuals who were simply seeking health care.

Unlike HIPAA, which regulates “covered entities” (providers of treatment and payors/insurers that are part of the traditional health care system) and their “business associates,” this bill would expand its reach to cover a broad range of “new” entities. These include data brokers, tech companies, and others in the digital ecosystem who can access and share this sensitive health information. The result is a growing web of entities collecting personal data, far beyond the scope of traditional health care providers.

For example, in some states, individuals have been investigated or even prosecuted based on their digital data, simply for obtaining abortion care. In a world where our health choices are increasingly monitored, the need for robust privacy protections is clearer than ever. The New York Health Information Privacy Act is the Empire State’s opportunity to lead the nation in protecting its residents.

What Does the Health Information Privacy Act Do?

At its core, the New York Health Information Privacy Act would provide vital protections for New Yorkers' electronic health data. Here’s what the bill does:

  • Prohibits the sale of health data: Health data is not a commodity to be bought and sold. This bill ensures that your most personal information is not used for profit by commercial entities without your consent.
  • Requires explicit consent: Before health data is processed, New Yorkers will need to provide clear, informed consent. The bill limits processing (storing, collecting, using) of personal data to “strictly necessary” purposes only, minimizing unnecessary collection.
  • Data deletion rights: Health data will be deleted by default after 60 days, unless the individual requests otherwise. This empowers individuals to control their data, ensuring that unnecessary information doesn’t linger.
  • Non-discrimination protections: Individuals will not face discrimination or higher costs for exercising their privacy rights. No one should be penalized for wanting to protect their personal information.

Why New York Needs This Bill Now

The need for these protections is urgent. As digital surveillance expands, so does the risk of personal health data being used against individuals. In a time when personal health decisions are under attack, it’s crucial that New Yorkers have control over their health information. By signing this bill, Governor Hochul would ensure that out-of-state actors cannot easily access New Yorkers’ health data without due process, protecting individuals from legal actions in states that criminalize reproductive and gender-affirming care.

However, this bill still faces a critical shortcoming—the absence of a private right of action (PRA). Without it, individuals cannot directly sue companies for privacy violations, leaving them vulnerable. Accountability would fall solely on the Attorney General, who would need the resources to quickly and consistently enforce the new law. Nonetheless, the Attorney General’s role will now be critical in ensuring this bill is upheld, and they must remain steadfast in implementing these protections effectively.

Governor Hochul: Sign A.2141/S.929

The importance of this legislation cannot be overstated—it is about protecting people from potential legal actions related to their health care decisions. By signing this bill, Governor Hochul would solidify New York’s position as a leader in health data privacy and take a firm stand against the misuse of personal information.

New York has the power to protect its residents and set a strong precedent for privacy protections across the nation. Let’s ensure that personal health data remains in the hands of those who own it—the individuals themselves.

Governor Hochul: This is your chance to make a difference. Let’s take action now to protect what matters most—our health, our data, and our rights. Sign A.2141/ S.929 today.


The Impact of Age Verification Measures Goes Beyond Porn Sites

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court: Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly defined topics. Some block access to websites that contain “sexual material harmful to minors,” but define the term so loosely that it could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely defined harms. In either instance, lawmakers and regulators could use these laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unsure of, and unable to precisely exclude, the minimum amount of content that fits the bill's definition, leading them to over-censor content that may well include this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education and LGBTQ+ identity and health being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA.

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from “sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users who turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default.

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those who are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This conglomeration of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression. 

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

VPNs Are Not a Solution to Age Verification Laws

VPNs are having a moment. 

On January 1st, Florida joined 18 other states in implementing an age verification law that burdens Floridians' access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub blocked access to users in Florida. Residents of the “Free State of Florida” have now lost access to the world's most popular adult entertainment website and the 16th-most-visited site of any kind in the world.

At the same time, Google Trends data showed a spike in searches for VPN access across Florida, presumably because users are trying to access the site via VPNs.

How Did This Happen?

Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access “adult” or “sexual” content online. Florida, Tennessee, and South Carolina are now among the nearly half of U.S. states where users can no longer access many major adult websites at all, while other states require verification under restrictive laws touted as child protection measures. These laws introduce surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.

Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida decided to push ahead and enact one of the most contentious age verification mandates earlier this year in HB 3.

HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age verification requirements face civil penalties and could be subject to lawsuits from the state. 

Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking access to users in regions where such laws are enforced. Before the law's implementation date, Florida users were greeted with this message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access PORNHUB?”

Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which could lead to potential breaches and misuse of that information. In a statement to local news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at risk:

Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.

This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law. Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.

The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance online. 

How Do VPNs Play a Role? 

Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. 

A VPN routes all your network traffic through an “encrypted tunnel” between your devices and the VPN server. The traffic then exits the VPN server and travels on to its ultimate destination, masking your original IP address. From a website’s point of view, your location appears to be wherever the VPN server is. A VPN should not be seen as a tool for anonymity: while it can hide your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. And there are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, and fingerprinting.

With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements. Each state or country may have a different method of enforcement and a different type of identification check, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws, especially as more sophisticated detection systems are introduced to identify and block VPN traffic.

The ever-growing conglomeration of age verification laws poses significant challenges for users trying to maintain anonymity online, and has the potential to harm us all—including the young people it is designed to protect.

What Can You Do?

If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.

No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. And in a context of weakening rights for already vulnerable communities online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms.

Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom Foundation, and others.

State Legislatures Are The Frontline for Tech Policy: 2024 in Review

State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, state lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.

This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.

Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year’s session.

EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.


States Continue to Experiment

Several states also introduced bills this year that raise similar issues as the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead introduces considerable censorship and privacy concerns.

For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that it was not clear about what counted as “sexually explicit content” under its definition. This vagueness would have set up barriers that prevent youth—particularly LGBTQ+ youth—from accessing legitimate content online.

We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers filed bills that have this requirement in more than a dozen states. As we said in comments to the New York Attorney General’s office on their recently passed “SAFE for Kids Act,” none of the requirements the state was considering are both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law a federal judge put its enforcement on pause (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to focus on the harms that deepfakes pose without endangering speech rights.

On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont have advanced bills that significantly improve on state privacy laws we’ve seen before. The Maryland Online Data Privacy Act (MODPA)—authored by State Senator Dawn Gile and Delegate Sara Love (now State Senator Sara Love)—contains strong data minimization requirements. Vermont’s privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.

Looking Ahead to 2025

2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms. 

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Triumphs, Trials, and Tangles From California's 2024 Legislative Session

California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this year.

EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.

Celebrating Our Victories

This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.

First, we commend the California Legislature for not advancing A.B. 3080, “The Parent’s Accountability and Child Protection Act,” authored by Assemblymember Juan Alanis (Modesto). The bill would have created powerful incentives for “pornographic internet websites” to use age-verification mechanisms. The bill was not clear about what counted as “sexually explicit content.” Without clear guidelines, it would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. We understand Asm. Alanis’ concerns, but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.

Second, EFF worked with dozens of organizations to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these matches can’t be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states. Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches—so what would this bill really change? The bill only gave the appearance of doing something to address face recognition technology’s harms, while allowing the practice to continue. California should not give law enforcement the green light to mine databases, particularly those where people contributed information without knowing it would be accessed by law enforcement. You can read more about this bill here, and we are glad to see the California legislature reject this dangerous bill.

EFF also worked to oppose and defeat S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button—required to be in place by January 1, 2026—to request the removal of their personal information held by data brokers registered in California. S.B. 1076 would have opened loopholes for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076 would likely have created significant confusion in the development, implementation, and long-term usability of the delete mechanism established by the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it.

Lastly, EFF opposed S.B. 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by Senator Scott Wiener (San Francisco). This bill aimed to regulate AI models that might have “catastrophic” effects, such as attacks on critical infrastructure. Ultimately, we believe focusing on speculative, long-term, catastrophic outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set an abstract and confusing set of regulations for those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained about the final version of the bill, as it passed the legislature.

Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.  

Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:

The Lost Promise of A.B. 3048

Throughout this session, EFF and our partners supported A.B. 3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet large companies have often made it difficult for consumers to exercise those rights on their own. The bill would have properly balanced providing consumers with ways to exercise their privacy rights without creating burdensome requirements for developers or hindering innovation.

Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist in that process. But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy rights.

This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their rights. 

Wrong Solutions to Real Problems

A big theme we saw this legislative session was proposals that claimed to address real problems but would have been ineffective or failed to respect privacy. These included bills intended to address young people’s safety online and deepfakes in elections.

While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor Newsom’s signature and takes effect on January 1, 2027. This proposal aims to regulate the “addictive” features of social media companies, but instead compromises the privacy of consumers in the state. The bill is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online, and will instead harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content about candidates and other elections-related subjects that creates a false portrayal about those subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms. 

Similarly, A.B. 2839, authored by Assemblymember Gail Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers, etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.

There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they were signed, a federal judge put A.B. 2839’s enforcement on pause (via a preliminary injunction) on First Amendment grounds.

Privacy Risks in State Databases

We also saw a troubling trend in the legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways, threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, which received Governor Newsom’s signature, are a step backward for data privacy.

A.B. 518 authorizes numerous agencies in California to share, without restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this personal information confidential, and no restrictions on what DSS can do with the information.

On the other hand, A.B. 2723 assigns a governing board to the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:

  1. the right of subjects whose information is kept in the data system to receive notice that their data is in the system;
  2. the right to consent or, more meaningfully, to withhold consent; and
  3. the right to request correction of erroneous information.

By signing A.B. 2723, Gov. Newsom stripped California parents and students of the rights to even know that this is happening, or agree to this data processing in the first place. 

Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.

A Call for Comprehensive Local Protections

A.B. 2723 and A.B. 518 reveal a crucial missing piece in Californians' privacy rights: that the privacy rights guaranteed to individuals through California's IPA do not protect them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be addressed.

Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.

Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.

Preemption Playbook: Big Tech’s Blueprint Comes Straight from Big Tobacco

October 16, 2024 at 16:53

Big Tech is borrowing a page from Big Tobacco's playbook to wage war on your privacy, according to Jake Snow of the ACLU of Northern California. We agree.  

In the 1990s, the tobacco industry attempted to use federal law to override a broad swath of existing state laws and prevent states from taking future action in those areas. For Big Tobacco, it was the “Accommodation Program,” a national campaign that ultimately aimed to override state indoor smoking laws with weaker federal law. Big Tech is now attempting the same with federal privacy bills, like the American Privacy Rights Act (APRA), that would preempt many state privacy laws.

In “Big Tech is Trying to Burn Privacy to the Ground–And They’re Using Big Tobacco’s Strategy to Do It,” Snow outlines a three-step process that both industries have used to weaken state laws. Faced with a public relations crisis, the industries:

  1. Muddy the waters by introducing various weak bills in different states.
  2. Complain that those bills are too confusing to comply with.
  3. Ask for “preemption” of grassroots efforts.

“Preemption” is a legal doctrine that allows a higher level of government to supersede the power of a lower level of government (for example, a federal law can preempt a state law, and a state law can preempt a city or county ordinance).  

EFF has a clear position on this: we oppose federal privacy laws that preempt current and future state privacy protections, especially by a lower federal standard.  

Congress should set a nationwide baseline for privacy, but should not take away states’ ability to react in the future to current and unforeseen problems. Earlier this year, EFF joined the ACLU and dozens of digital and human rights organizations in opposing APRA’s preemption sections. The letter points out that “the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling.” EFF led a similar coalition effort in 2018.

Companies that collect and use our data—and have worked to kill strong state privacy bills time and again—want Congress to believe a “patchwork” of state laws is unworkable for data privacy. But many existing federal laws concerning privacy, civil rights, and more operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. Complaints of this “patchwork” have long been a part of the strategy for both Big Tech and Big Tobacco.

States have long been the “laboratories of democracy” and have led the way in the development of innovative privacy legislation. Because of this, federal laws should establish a floor and not a ceiling, particularly as new challenges rapidly emerge. Preemption would leave consumers with inadequate protections, and make them worse off than they would be in the absence of federal legislation.  

Congress never preempted states' authority to enact anti-smoking laws, despite Big Tobacco’s strenuous efforts. So there is hope that Big Tech won’t be able to preempt state privacy law, either. EFF will continue advocating against preemption to ensure that states can protect their citizens effectively. 

Read Jake Snow’s article here.
