
Calyx Institute: A Case Study in Grassroots Innovation

By: Rory Mir
April 3, 2025 at 09:36

Technologists play a huge role in building alternative tools and resources when our rights to privacy and security are undermined by governments and major corporations. This direct resistance ensures that even in the face of powerful adversaries, communities can find some safety and autonomy through community-built tools.

One of the most renowned names in this work is the Calyx Institute, a New York-based 501(c)(3) nonprofit founded by Nicholas Merrill after a successful and influential constitutional challenge to the National Security Letter (NSL) statute in the USA Patriot Act. Today Calyx’s mission is to defend digital privacy, advance connectivity, and strive for a future where everyone has access to the resources and tools they need to remain securely connected. Their work is made possible thanks to the generous donations of their more than 12,000 grassroots members.

More recently, Calyx joined EFF’s network of grassroots organizations across the US, the Electronic Frontier Alliance (EFA). Members of the alliance are not-for-profit local organizations dedicated to EFA’s five guiding principles: privacy, free expression, access to knowledge, creativity, and security. Calyx has since been an exceptional ally, lifting up and collaborating with fellow members.

If you’re inspired by Calyx to start making a difference in your community, you can get started with our organizer toolkits. Once you’re ready, we hope you consider applying to join the alliance.

JOIN EFA

Defend Digital Rights Locally

We corresponded with Calyx over email to discuss the group's ambitious work, and what the future holds for Calyx. Here are excerpts from our conversation:

Thanks for chatting with us, to get started could you tell us a bit about Calyx’s current work?

Calyx focuses on three areas: (1) developing a privacy-respecting software ecosystem, (2) bridging the digital divide with affordable internet access, and (3) sustaining our community through grants, research, and educational initiatives.

We build and maintain a digital ecosystem of free and open-source software (FOSS) centering on CalyxOS, an Android operating system that encrypts communications, combats invasive metadata collection, and protects users from geolocation tracking. The Calyx Internet Membership Program offers mobile hotspots so people have a way to stay connected despite limited resources or a lack of viable alternatives. Finally, Calyx actively engages with diverse stakeholder groups to build a shared understanding of privacy and expand digital-security literacy, and provides grants that directly support aligned organizations. By partnering with our peers, funders, and service providers, we hope to drive collective action toward a privacy- and rights-respecting future of technology.

Calyx projects work with a wide range of technologies. What are some barriers Calyx runs into in this work?

Our biggest challenge is one shared by many tech communities, particularly FOSS advocates: it is difficult to balance privacy and security with usability in tool development. On the one hand, the current data-mining business model of the tech sector makes it extremely hard to provide FOSS solutions to proprietary tech while keeping the tool intuitive and easy to use. On the other, there is a general lack of momentum for funding and growing an alternative digital ecosystem.

As a result, many digital rights enthusiasts are left with scarce resources and a narrow space within which to work on technical solutions. We need more people to work together and collectively advocate for a privacy-respecting tech ecosystem that cares about all communities and does not marginalize anyone.

Take CalyxOS, for example. Before it became a tangible project, our founder Nick spent years thinking about an alternative mobile operating system that put privacy first. Back in 2012, Nick spoke to Moxie Marlinspike, the creator of the Signal messaging app, about his idea. Moxie shared several valid concerns that almost led Nick to stop working on it. Fortunately, these warnings, which came from Moxie’s experience and success with Signal, made Nick even more determined, and he recruited an expert global team to help realize his idea.

What do you see as the role of technologists in defending civil liberties with local communities?

Technologists are enablers—they build tools and technical infrastructures, fundamental parts of the digital ecosystem within which people exercise their rights and enjoy their lives. A healthy digital ecosystem consists of technologies that liberate people. It is an arena where people willingly and actively connect and share their expertise, confident in the shared protocols that protect everyone’s rights and dignity. That is why Calyx builds and advocates for people-centered, privacy-focused FOSS tools.

How has Calyx supported folks in NYC? What have you learned from it?

It’s a real privilege to be part of the NYC tech community, which has such a wealth of technologists, policy experts, human rights watchdogs, and grassroots activists. In recent years, we joined efforts led by multiple networks and organizations to mobilize against unjustifiable mass surveillance and other digital threats faced by millions of people of color, immigrants, and other underrepresented groups.

We’re particularly proud of the support we provided to another EFA member, Surveillance Technology Oversight Project, on the Ban the Scan campaign to ban facial recognition in NYC, and CryptoHarlem to sustain their work bringing digital privacy and cybersecurity education to communities in Harlem and beyond. Most recently, we funded Sunset Spark—a small nonprofit offering free education in science and technology in the heart of Brooklyn—to develop a multipurpose curriculum focused on privacy, internet infrastructure, and the roles of the public and private sectors in our digital world.

These experiences deeply inspired us to shape a funding philosophy that centers the needs of organizations and groups with limited resources, helps local communities break barriers and build capacity, and grows reciprocal relationships between each member of the community.

You mentioned a grantmaking program, which is a really unique project for an EFA member. Could you tell us a bit about your theory of change for the program?

Since 2020, the Calyx Institute has been funding the development of digital privacy and security tools, research on mass surveillance systems, and training efforts to equip people with the knowledge and tools they need to protect their right to privacy and connectivity. In 2022, Calyx launched the Fusion Center Research Fund to aid investigations into law enforcement harvesting of personal data through intelligence-sharing centers. This effort, with nearly $200,000 disbursed to grantees, helped reveal the deleterious impact of surveillance technology on privacy and freedom of expression.

These efforts have led to the Sepal Fund, Calyx’s pilot program to offer small groups unrestricted and holistic grants. This program will provide five organizations, collectives, or projects a yearly grant of up to $50,000 for a total of three years. In addition, we will provide our grantees opportunities for professional development, as well as other resources. Through this program, we hope to sustain and elevate research, tool development, and education that will support digital privacy and defend internet freedom.


Could you tell us a bit about how people can get involved?

All our projects are, at their core, community projects, and we welcome insights and involvement from anyone to whom our work is relevant. CalyxOS offers a variety of ways to connect, including a CalyxOS Matrix room and GitLab repository where users and programmers interact in real time to troubleshoot and discuss improvements. Part of making CalyxOS accessible is ensuring that it’s as widely available as possible, so anyone who would like to be part of that translation and localization effort should visit our Weblate site.

What does the future look like for Calyx?

We are hoping that the future holds big things for us, like CalyxOS builds on more affordable and globally available mobile devices so that people in different locations with varied resources can equally enjoy the right to privacy. We are also looking forward to updating our visual communication—we have been “substance over style” for so long that it will be exciting to see how a refreshed look will help us reach new audiences.

Finally, what’s your “moonshot”? What’s the ideal future Calyx wants to build?

The Calyx dream is accessible digital privacy, security, and connectivity for all, regardless of budget or tech background, centering communities that are most in need.

We want a future where everyone has access to the resources and tools they need to remain securely connected. To get there, we’ll need to work on building a lot of capacity, both technological and informational. Great tools can only fulfill their purpose if people know why and how to use them. Creating those tools and spreading the word about them requires collaboration, and we are proud to be working toward that goal alongside all the organizations that make up the EFA.

Our thanks to the Calyx Institute for their continued efforts to build private and secure tools for targeted groups, in New York City and across the globe. You can find and support other Electronic Frontier Alliance affiliated groups near you by visiting eff.org/fight.

Online Tracking is Out of Control—Privacy Badger Can Help You Fight Back

By: Lena Cohen
March 27, 2025 at 17:09

Every time you browse the web, you're being tracked. Most websites contain invisible tracking code that allows companies to collect and monetize data about your online activity. Many of those companies are data brokers, who sell your sensitive information to anyone willing to pay. That’s why EFF created Privacy Badger, a free, open-source browser extension used by millions to fight corporate surveillance and take back control of their data. 

Since we first released Privacy Badger in 2014, online tracking has only gotten more invasive and Privacy Badger has evolved to keep up. Whether this is your first time using it or you’ve had it installed since day one, here’s a primer on how Privacy Badger protects you.

Online Tracking Isn't Just Creepy—It’s Dangerous 

The rampant data collection, sharing, and selling fueled by online tracking has serious consequences. Fraudsters purchase data to identify elderly people susceptible to scams. Government agencies and law enforcement purchase people’s location data and web browsing records without a warrant. Data brokers help predatory companies target people in financial distress. And surveillance companies repackage data into government spy tools.

Once your data enters the data broker ecosystem, it’s nearly impossible to know who buys it and what they’re doing with it. Privacy Badger blocks online tracking to prevent your browsing data from being used against you. 

Privacy Badger Disrupts Surveillance Business Models

Online tracking is pervasive because it’s profitable. Tech companies earn enormous profits by targeting ads based on your online activity—a practice called “online behavioral advertising.” In fact, Big Tech giants like Google, Meta, and Amazon are among the top companies tracking you across the web. By automatically blocking their trackers, Privacy Badger makes it harder for Big Tech companies to profit from your personal information.

Online behavioral advertising has made surveillance the business model of the internet. Companies are incentivized to collect as much of our data as possible, then share it widely through ad networks with no oversight. This not only exposes our sensitive information to bad actors, but also fuels government surveillance. Ending surveillance-based advertising is essential for building a safer, more private web. 

While strong federal privacy legislation is the ideal solution—and one that we continue to advocate for—Privacy Badger gives you a way to take action today. 

Privacy Badger fights for a better web by incentivizing companies to respect your privacy. Privacy Badger sends the Global Privacy Control and Do Not Track signals to tell companies not to track you or share your data. If they ignore these signals, Privacy Badger will block them, whether they are advertisers or trackers of other kinds. By withholding your browsing data from advertisers, data brokers, and Big Tech companies, you can help make online surveillance less profitable. 
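
If you're curious what these signals look like from a website's point of view, here is a minimal sketch, in TypeScript, of how a site's own script could honor them. It assumes the navigator.globalPrivacyControl property defined by the GPC proposal and the legacy navigator.doNotTrack property; the visitorHasOptedOut helper and the log messages are illustrative, not part of Privacy Badger or any real site's code.

    // Shape of the opt-out signals a browser (or an extension like
    // Privacy Badger) may expose; both properties are optional because
    // not every browser implements them.
    interface OptOutSignals {
      globalPrivacyControl?: boolean; // mirrors the "Sec-GPC: 1" request header
      doNotTrack?: string | null;     // mirrors the legacy "DNT: 1" header
    }

    // Returns true when the visitor has asked not to be tracked, or not to
    // have their data sold or shared, via either GPC or Do Not Track.
    function visitorHasOptedOut(): boolean {
      const signals = navigator as unknown as OptOutSignals;
      return signals.globalPrivacyControl === true || signals.doNotTrack === "1";
    }

    // Hypothetical usage: a site that respects the signals would skip
    // loading third-party tracking code for opted-out visitors.
    if (visitorHasOptedOut()) {
      console.log("GPC/DNT detected: not selling or sharing this visitor's data.");
    } else {
      console.log("No opt-out signal detected.");
    }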

How Privacy Badger Protects You From Online Tracking

Whether you're looking to protect your sensitive information from data brokers or simply don’t want Big Tech monetizing your data, Privacy Badger is here to help.

Over the past decade, Privacy Badger has evolved to fight many different methods of online tracking. Here are some of the ways that Privacy Badger protects your data:

  • Blocks Third-Party Trackers and Cookies: Privacy Badger stops tracking code from loading on sites that you visit. That prevents companies from collecting data about your online activity on sites that they don’t own. 
  • Sends the GPC Signal to Opt Out of Data Sharing: Privacy Badger sends the Global Privacy Control (GPC) signal to opt out of websites selling or sharing your personal information. This signal is legally binding in some states, including California, Colorado, and Connecticut. 
  • Stops Social Media Companies From Tracking You Through Embedded Content: Privacy Badger replaces page elements that track you but are potentially useful (like embedded tweets) with click-to-activate placeholders. Social media buttons, comments sections, and video players can send your data to other companies, even if you don’t click on them.
  • Blocks Link Tracking on Google and Facebook: Privacy Badger blocks Google and Facebook’s attempts to follow you whenever you click a link on their websites. Google not only tracks the links you visit from Google Search, but also the links you click on platforms that feel more private, like Google Docs and Gmail.
  • Blocks Invasive “Fingerprinting” Trackers: Privacy Badger blocks trackers that try to identify you based on your browser's unique characteristics, a particularly problematic form of tracking called “fingerprinting.” 
  • Automatically Learns to Block New Trackers: Our Badger Swarm research project continuously discovers new trackers for Privacy Badger to block. Trackers are identified based on their behavior, not just human-curated blocklists (see the sketch after this list).
  • Disables Harmful Chrome Settings: Automatically disables Google Chrome settings that are bad for your privacy.
  • Easy to Disable on Individual Sites While Maintaining Protections Everywhere Else: If blocking harmful trackers ends up breaking something on a website, you can disable Privacy Badger for that specific site while maintaining privacy protections everywhere else.
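
To make the "Automatically Learns" bullet above concrete, here is a simplified sketch of behavior-based tracker detection. This is not Privacy Badger's actual code; the class and method names are invented, and the three-site threshold reflects Privacy Badger's long-documented heuristic but should be treated as an assumption here.

    // Simplified model: a third-party domain is blocked once it has been
    // observed tracking (for example, setting cookies) on enough distinct sites.
    class TrackerHeuristic {
      private sightings = new Map<string, Set<string>>();

      constructor(private readonly threshold = 3) {}

      // Record that trackerDomain exhibited tracking behavior while the
      // user was visiting siteDomain.
      observeTracking(trackerDomain: string, siteDomain: string): void {
        const sites = this.sightings.get(trackerDomain) ?? new Set<string>();
        sites.add(siteDomain);
        this.sightings.set(trackerDomain, sites);
      }

      // A domain is treated as a tracker once it has been seen tracking
      // across the threshold number of different sites.
      shouldBlock(trackerDomain: string): boolean {
        return (this.sightings.get(trackerDomain)?.size ?? 0) >= this.threshold;
      }
    }

    // The same tracker seen on three unrelated sites gets blocked.
    const heuristic = new TrackerHeuristic();
    heuristic.observeTracking("tracker.example", "news.example");
    heuristic.observeTracking("tracker.example", "shop.example");
    console.log(heuristic.shouldBlock("tracker.example")); // false (2 sites)
    heuristic.observeTracking("tracker.example", "blog.example");
    console.log(heuristic.shouldBlock("tracker.example")); // true (3 sites)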

All of these privacy protections work automatically when you install Privacy Badger—there’s no setup required! And it turns out that when Privacy Badger blocks tracking, you’ll also see fewer ads and your pages will load faster. 

[Screenshot: Privacy Badger’s popup showing that Privacy Badger blocked 38 trackers on WebMD.]

You can always check to see what Privacy Badger has done on the site you’re visiting by clicking on Privacy Badger’s icon in your browser toolbar.

Fight Corporate Surveillance by Spreading the Word About Privacy Badger

Privacy is a team sport. The more people who withhold their data from data brokers and Big Tech companies, the less profitable online surveillance becomes. If you haven’t already, visit privacybadger.org to install Privacy Badger on your web browser. And if you like Privacy Badger, tell your friends about how they can join us in fighting for a better web!

Install Privacy Badger

Podcast Episode Rerelease: Dr. Seuss Warned Us

By: Josh Richman
March 23, 2025 at 12:42

This episode was first released on May 2, 2023.

We’re excited to announce that we’re working on a new season of How to Fix the Internet, coming in the next few months! But today we want to lift up an earlier episode that has particular significance right now. In 2023, we spoke with our friend Alvaro Bedoya, who was appointed as a Commissioner for the Federal Trade Commission in 2022. In our conversation, we talked about his work there, about why we need to be wary of workplace surveillance, and why it’s so important for everyone that we strengthen our privacy laws. We even talked about Dr. Seuss!

Last week the Trump administration attempted to terminate Alvaro, along with another FTC commissioner, even though Alvaro's appointment doesn't expire until 2029. The law is clear: The president does not have the power to fire FTC commissioners at will. The FTC’s focus on protecting privacy has been particularly important over the last five years; with Alvaro's firing, the Trump Administration has stepped far away from that needed focus to protect all of us as users of digital technologies.

We hope you’ll take some time to listen to this May 2023 conversation with Alvaro about the better digital world he’s been trying to build through his work at the FTC and his previous work as the founding director of the Center on Privacy & Technology at Georgetown University Law Center.

Dr. Seuss wrote a story about a Hawtch-Hawtcher Bee-Watcher whose job it is to watch his town’s one lazy bee, because “a bee that is watched will work harder, you see.” But that doesn’t seem to work, so another Hawtch-Hawtcher is assigned to watch the first, and then another to watch the second... until the whole town is watching each other watch a bee.

[Embedded audio player. Privacy info: this embed will serve content from simplecast.com.]

Listen on Apple Podcasts, listen on Spotify, or subscribe via RSS.

You can also find this episode on the Internet Archive and on YouTube.

To Federal Trade Commissioner Alvaro Bedoya, the story—which long predates the internet—is a great metaphor for why we must be wary of workplace surveillance, and why we need to strengthen our privacy laws. Bedoya has made a career of studying privacy, trust, and competition, and wishes for a world in which we can do, see, and read what we want, living our lives without being held back by our identity, income, faith, or any other attribute. In that world, all our interactions with technology—from social media to job or mortgage applications—are on a level playing field.

Bedoya speaks with EFF’s Cindy Cohn and Jason Kelley about how fixing the internet should allow all people to live their lives with dignity, pride, and purpose.

In this episode, you’ll learn about: 

  • The nuances of work that “bossware,” employee surveillance technology, can’t catch. 
  • Why the Health Insurance Portability and Accountability Act (HIPAA) isn’t the privacy panacea you might think it is. 
  • Making sure that one-size-fits-all privacy rules don’t backfire against new entrants and small competitors. 
  • How antitrust fundamentally is about small competitors and working people, like laborers and farmers, deserving fairness in our economy. 

Alvaro Bedoya was nominated by President Joe Biden, confirmed by the U.S. Senate, and sworn in May 16, 2022 as a Commissioner of the Federal Trade Commission; his term expires in 2029. Bedoya was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. He has been influential in research and policy at the intersection of privacy and civil rights, and co-authored a 2016 report on the use of facial recognition by law enforcement and the risks that it poses. He previously served as the first Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law after its founding in 2011, and as Chief Counsel to former U.S. Sen. Al Franken (D-MN); earlier, he was an associate at the law firm WilmerHale. A naturalized immigrant born in Peru and raised in upstate New York, Bedoya previously co-founded the Esperanza Education Fund, a college scholarship for immigrant students in the District of Columbia, Maryland, and Virginia. He also served on the Board of Directors of the Hispanic Bar Association of the District of Columbia. He graduated summa cum laude from Harvard College and holds a J.D. from Yale Law School, where he served on the Yale Law Journal and received the Paul & Daisy Soros Fellowship for New Americans.

Transcript

ALVARO BEDOYA
One of my favorite Dr. Seuss stories is about this town called Hawtch Hawtch. So, in the town of Hawtch Hawtch, there's a town bee and you know, they presumably make honey, but the Hawtch Hawtchers one day realize that the bee that is watched will work harder, you see? And so they hire a Hawtch Hawtcher to be on bee-watching watch, but then you know, the bee isn't really doing much more than it normally is doing. And so they think, oh, well, the Hawtch Hawtcher is not watching hard enough. And so they hire another Hawtch Hawtcher to be on bee watcher watcher watch, I think is what Dr. Seuss calls it. And so there's this wonderful drawing of 12 Hawtch Hawtchers, you know, each one and either watching, watching watch, or actually, you know, the first one's watching the bee and, and the whole thing is just completely absurd.

CINDY COHN
That’s FTC Commissioner Alvaro Bedoya describing his favorite Dr. Seuss story – which he says works perfectly as a metaphor for why we need to be wary of workplace surveillance, and strengthen our privacy laws.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy. This is our podcast, How to Fix the Internet.

Our guest today is Alvaro Bedoya. He’s served as a commissioner for the Federal Trade Commission since May of 2022, and before that he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. So he thinks a lot about many of the issues we’re also passionate about at EFF – trust, privacy, competition, for example – and about how these issues are all deeply intertwined.

CINDY COHN
We decided to start with our favorite question: What does the world look like if we get this stuff right?

ALVARO BEDOYA
For me, I think it is a world where you wake up in the morning, live your life, and your ability to do what you want to do, see what you wanna see, read what you wanna read, and live the life that you want to live is unconnected to who you are, in a good way.

In other words, what you look like, what side of the tracks you're from, how much money you have. Your gender, your gender identity, your sexuality, your religious beliefs, that those things don't hold you down in any way, and that you can love those things and have those things be a part of your life. But that they only empower you and help you. I think it's also a world… we see the great parts of technology. You know, one of the annoying things of having worked in privacy for so long is that you're often in this position where you have to talk about how technology hurts people. Technology can be amazing, right?

Mysterious, wonderful, uh, empowering. And so I think this is a world where those interactions are defined by those positive aspects of technology. And so for me, when I think about where those things go wrong, sorry, falling into old tropes here, but thinking about it positively, increasingly, people are applying for jobs online. They're applying for mortgages online. They are doing all these capital letter decisions that are now mediated by technology.

And so this world is also a world where, again, you are treated fairly in those decisions and you don't have to think twice about, hold on a second, I just applied for a loan. I just applied for a job, you know, I just applied for a mortgage. Is my zip code going to be used against me? Is my social media profile, you know, that reveals my interests gonna be used against me. Is my race gonna be used against me? In this world, none of that happens, and you can focus on preparing for that job interview and finding the right house for you and your family, finding the right rental for you and your family.

Now, I think it's also a world where you can start a small business without fear that the simple fact that you're not connected to a bigger platform or a bigger brand will be used against you, where you have a level playing field to win people over.

CINDY COHN
I think that's great. You know, leveling the playing field is one of the original things that we were hoping, you know, that digital technologies could do. It also makes me think of that old New Yorker thing, you know, on the internet, no one knows you're a dog.

ALVARO BEDOYA
(Laughs) Right.

CINDY COHN
In some ways I think the vision is on the internet. You know, again, I don't think that people should leave the other parts of their lives behind when they go on the internet. Your identity matters, but that it doesn't, the fact that you're a dog doesn't mean you can't play. I'm probably butchering that poor cartoon too much.

ALVARO BEDOYA
No, I don't. I don't think you are, but I don't know why it did, but it reminded me of one other thing, which is in this world, you, you go to work. Whether it's at home in your basement like I am now, you know, or in your car or at an office, uh, uh, at a business. And you have a shot at working with pride and dignity where every minute of your work isn't measured and quantified. Where you have the ability to focus on the work rather than the surveillance of that work and the judgments that other people might make around that minute surveillance and, and you can focus on the work itself. I think too often people don't recognize the strangeness of the fact that when you watch TV, when you watch a streaming site, when you watch cable, when you go shopping, all of that stuff is protected by privacy law. And yet most of us spend a good part of our waking hours working and there are really no federal, uh, uh, worker privacy protections. That, for me is, is one of the biggest gaps in our sectoral privacy system that we've yet to confront.

But the world that you wanted me to talk about definitely is a world where you can go to work and do that work with dignity and pride, uh, without minute surveillance of everything you do.

CINDY COHN
Yeah. And I think inherent in that is this, you know, this, this observation that, you know, being watched all the time doesn't work as a matter of humanity, right? It's a human rights issue to be watched all the time. I mean, that's why when they build prisons, right, it's the panopticon, right? That's where that idea comes from, is this idea that people who have lost their liberty get watched all the time.

So that has to be a part of building this better future, a space where, you know, we’re not being watched all the time. And I think you're exactly right that we kind of have this gigantic hole in people's lives, which is their work lives where it's not only that people don't have enough freedom right now, it's actually headed in the other direction. I know this is something that we think about a lot, especially Jason does at EFF.

JASON KELLEY
Yeah, I mean we, we write quite a bit about bossware. We've done a variety of research into bossware technology. I wonder if you could talk a little bit about maybe like some concrete examples that you've seen where that technology is sort of coming to fruition, if you will. Like it's being used more and more and, and why we need to, to tackle it, because I think a lot of people probably, uh, listening to this aren't, aren't as familiar with it as they could be.

And at the top of this episode we heard you describe your favorite Dr. Seuss tale – about the bees and the watchers, and the watchers watching the watchers, and so on to absurdity. Now can you tell us why you think that’s such an important image?

ALVARO BEDOYA
I think it's a valuable metaphor for the fact that a lot of this surveillance software may not offer as complete a picture as employers might think it does. It may not have the effect that employers think it does, and it may not ultimately do what people want it to do. And so I think that anyone who is thinking about using the software should ask hard questions about ‘is this actually gonna capture what I'm being told it will capture? Does it account for the 20% tasks of my workers' jobs?’ So, you know, there's always an 80/20 rule and so, you know, as with, as with work, most of what you do is one thing, but there's usually 20% that's another thing. And I think there's a lot of examples where that 20%, like, you know, occasionally using the bathroom right, isn't accounted for by the software. And so it looks like the employee’s slacking, but actually they're just being a human being. And so I would encourage people to ask hard questions about the sophistication of the software and how it maps onto the realities of work.

JASON KELLEY
Yeah. That's a really accurate way for people to start to think about it because I think a lot of people really feel that. Um, if they can measure it, then it must be useful.

ALVARO BEDOYA
Yes!

JASON KELLEY
In my own experience, before I worked at EFF, I worked somewhere where, eventually, a sort of boss ware type tool was installed and it had no connection to the job I was doing.

ALVARO BEDOYA
That’s interesting.

JASON KELLEY
It was literally disconnected.

ALVARO BEDOYA:
Can you share the general industry?

JASON KELLEY
It was software. I worked as a, I was in marketing for a software company and um, I was remote and it was remote way before the pandemic. So, you know, there's sort of, I think bossware has increased probably during the pandemic. I think we've seen that because people are worried that if you're not in the office, you're not working.

ALVARO BEDOYA
Right.

JASON KELLEY
There's no evidence, bossware can't give evidence that that's true. It can just give evidence in, you know, whether you're at your computer –

ALVARO BEDOYA
Right. Whether you're typing.

JASON KELLEY
Whether you're typing. Yeah. And what happened in my scenario without going into too much detail was that it mattered what window I was in. And it didn't always, at first it was just like, are you at your computer for eight hours? And then it was, are you at your computer in these specific windows for eight hours? And then it was, are you typing in those specific windows for eight hours? The screws kept getting twisted, right, until I was actually at my computer for 12 hours to get eight hours of ‘productive’ work in, as it was called.

And so, yeah, I left that job. Obviously, I work at EFF now for a reason. And it was one of the things that I remember when I started at EFF, part of what I like about what we do is that we think about people's humanity in what they're doing and how that interacts with technology.

And I think bossware is one of those areas where it doesn't, um, because it, it is so common for an employer to sort of disengage from the employee and sort of think of them as like a tool. It's, it's an area where it's easy to install something or try to install something where that happens. So I'm glad you're working on it. It's definitely an issue.

ALVARO BEDOYA
Well, I'm thinking about it, you know, and it's certainly something I, I care about and, and I think, I think my hope is, my hope is that, um, you know, the pandemic was horrific. Is horrific. My hope is that one of the realizations coming out of it from so many people going remote is the realization that particularly for some jobs, you know, uh, um, a lot of us are lucky to have these jobs where a lot of our time turns on being able to think clearly and carefully about a, about something, and that's a luxury.

Um, but particularly for those jobs, my, my suspicion is for an even broader range of jobs, that this idea of a workday where you sit down, work eight hours and sit up, you know, and, and that is the ideal workday, I don't think that's a maximally productive day, and I think there's some really interesting trials around the four-day work week, and my hope is that, you know, when my kids are older, that there will be a recognition that working harder, staying up later, getting up earlier, is not the best way to get the best work from people. And people need time to think. They need time to relax. They need time to process things. And so that is my hope that that is one of the realizations around it. But you're exactly right, Jason, is that one of my concerns around this software is that there's this idea that if it can be measured, it must be important. And I think you use a great example, speaking in general here, that of software that may presume that if you aren't typing, you're not working, or if you're not in a window, you're not working, when actually you might be doing the most important work. You know, jotting down notes, organizing your thoughts, that lets you do the best stuff, as it were.

Music transition

JASON KELLEY
I want to jump in for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

Now back to our conversation with Alvaro Bedoya.

CINDY COHN
Privacy issues are of course near and dear to our hearts at EFF and I know that's really the world you come out of as well. Although your perch is a little, a little different right now. We came to the conclusion that we can't address privacy if we don't address competition and antitrust issues. And I think you've come someplace similar perhaps, and I'd love for you to talk about how you think privacy and questions around competition and antitrust intertwine.

ALVARO BEDOYA
So I will confess, I don't know if I have figured it out, but I can offer a few thoughts. First of all, I think that a lot of the antitrust claims are not what they seem to be. When companies talk about how important it is to have gatekeeping around app stores because of privacy (and this is one of the reasons I support the bills, I think it's the Blumenthal-Blackburn bill, to, um, to change the way app stores are, are run and, and, and kick the tires on that gatekeeping model), because I am skeptical about a lot of those pro-privacy, anti-antitrust claims, that is one thing. On the other hand, I do think we need to think carefully about the rules that are put in place, backfiring against new entrants and small competitors. And I think a lot of legislators and policymakers in the US and Europe appreciate this and are getting this right and institute a certain set of rules for bigger companies and different ones for smaller ones. I think one of the ways this can go wrong is when it's just about the size of the company rather than the size of the user base.

I think that if you are, you know, suddenly at a hundred million users, then you're not a small company, even if you have, you know, a small number of employees, but I, I do think that those concerns are real and that policymakers and people in my role need to think about the costs of privacy compliance in a way that does not inadvertently create an unlevel playing field for, for small competitors.

I will confess that sometimes things that appear to be, uh, um, antitrust problems are privacy problems in that they reflect legal gaps around the sectoral privacy framework that unfortunately has yet to be updated. So I think I can give one example where there was the recent merger of, uh, Amazon and One Medical, and, well, I can't go into the antitrust analysis that may or may not have occurred at the commission. I wrote a statement on the completion of the merger, which highlighted a gap that we have around the anonymization rule in our health privacy law. For example, people think that HIPAA is actually the Health Information Privacy Act. It's not, it's actually the Health Insurance Portability and Accountability Act. And I think that little piece of common wisdom speaks to a broader gap in our understanding of health privacy. So I think a lot of people think HIPAA will protect their data and that it won't be used in other ways by their doctor, by whoever it is that has their HIPAA-protected data. Well, it turns out that in 2000, when HHS promulgated the privacy rule in good faith, it had a provision that said, Hey, look, we want to encourage the improvement in health services. We want to encourage health research and we want to encourage public health. And so we're gonna say that if you remove these, you know, 18 identifiers from health data, that it can be used for other purposes. And if you look at the rule that was issued, the justification for it is that they want to promote public health.

Unfortunately, they did not put a use restriction on that. And so now, if any doctor's practice, anyone covered by HIPAA, and I'm not gonna go into the rabbit hole of who is and who isn't, but if you're covered by HIPAA, all they need to do is remove those identifiers from the data.

And HHS is unfortunately very clear that you can essentially do a whole lot of things that have nothing to do with healthcare as long as you do that. And what I wrote in my statement is that would surprise most consumers. Frankly, it surprised me when I connected the dots.

CINDY COHN
What I'm hearing here, which I think is really important is, first of all, we start off by thinking that some of our privacy problems are really due to antitrust concerns, but what we learn pretty quickly when we're looking at this is, first of all, privacy is used frankly, as a blocker for common sense reforms that we might need, that these giants come in and they say, well, we're gonna protect people's privacy by limiting what apps are in the app store. And, and we need to look closely at that because it doesn't seem to be necessarily true.

So first of all, you have to watch out for the kind of fake privacy argument or the argument that the tech giants need to be protected because they're protecting our privacy and we need to really interrogate that. And at the bottom of it, it often comes down to the fact that we haven't really protected people's privacy as a legal matter, right? We, we, we ground ourselves in Larry Lessig's, uh, four pillars of change, right? Code, norms, laws, and markets. And you know, what they're saying is, well, we have to protect, you know, essentially what is a non-market, but the, the tech giants, that markets will protect privacy and so therefore we can't introduce more competition. And I think at the bottom of this, what we find a lot is that it's, you know, the law should be setting the baseline, and then markets can build on top of that. But we've got things a little backwards. And I think that's especially true in health. It's, it's, it's very front and center for those of us who care about reproductive justice, who are looking at the way health insurance companies are now part and parcel of other data analysis companies. And the Amazon/One Medical one is, is another one of those that unless we get the privacy law right, it's gonna be hard to get at some of these other problems.

ALVARO BEDOYA
Yeah. And those are the three things that I think a lot about. First, that those pro-privacy arguments that seem to cut against, uh, competition concerns are often not what they seem.

Second, that we do need to take into account how one-size-fits-all privacy rules could backfire in a way that hurts, uh, small companies, small competitors, uh, who are the lifeblood of, uh, innovation and employment, frankly. And, and lastly, sometimes what we're actually seeing are gaps in our sectoral privacy system.

CINDY COHN
One of the things that I know you've, you've talked about a little bit is, um, you're calling it a return to fairness, and that's specifically talking about a piece of the FTC’s authority. And I wonder if you could talk about that a little more and how you see that fitting into a, a better world.

ALVARO BEDOYA
Sure. One of the best parts of this job, um, was having this need and opportunity to immerse myself in antitrust. So as a Senate staffer, I did a little bit of work on the Comcast, uh, NBC merger against, against that merger, uh, for my old boss, Senator Franken. But I didn't spend a whole lot of time on competition concerns. And so when I was nominated, I, you know, quite literally, you know, ordered antitrust treatises and read them cover to cover.

CINDY COHN
Wonderful!

ALVARO BEDOYA
Well, sometimes it's wonderful and sometimes it's not. But in this case it was. And what you see is this complete two-sided story where on the one hand you have this really anodyne, efficiency-based description of antitrust, where it is about enforcing abstract laws and maximizing efficiency, and the saying that, you know, antitrust protects competition, not competitors, and you so quickly lose sight of why we have antitrust laws and how we got them.

And so I didn't just read treatises on the law. I also read histories. And one of the things that you read and realize when you read those histories is that antitrust isn't about efficiency, antitrust is about people. And yes, it's about protecting competition, but the reason we have it is because of what happened to certain people. And so, you know, the Sherman Act, you listen to those floor debates, it is fascinating because first of all, everyone agrees as to what we want to do, what Congress wanted to do. Congress wanted to rein in the trusts; they wanted to rein in John Rockefeller, JP Morgan, the beef trust, the sugar trust, the steel trust. Not to mention, you know, Rockefeller's oil trust. The most common concern on the floor of the Senate was what was happening to cattlemen because of concentration in meat packing plants and the prices they were getting when they brought their cattle to processors, and to market. And then you look at, uh, 1914, the Clayton Act again. There was outrage, true outrage about how those antitrust laws, you know, 10 out of the first 12 antitrust injunctions in our, in our country post-Sherman, were targeted at workers and not just any workers. They were targeted at rail car manufacturers in Pullman, where it was an integrated workforce and they were working extremely long hours for a pittance in wages, and they decided to strike.

And some of the first injunctions we saw in this country were used to break their strike, or how it was used against, uh, uh, I think they're called drayage men or dray men in New Orleans, port workers and dock workers in New Orleans, who again, were working these 12-hour days for, for nothing in wages. And this beautiful thing happened in New Orleans where the entire city went on strike.

It was, I think it was 30 unions. It was like the typographical workers unions. And if you think that that refers to people typing on keyboards, it does. From the people typing on mechanical typewriters to the people, you know, unload loading ships in the dock of, in the port of New Orleans, everyone went on strike and they had this, this organization called the Amalgamated Working Men's Council. And um, and they went, they wanted a 10 hour, uh, uh, workday. They wanted overtime pay, and they wanted, uh, uh, union shops. They got two out of those three things. But, um, but I think it was the trade board was so unhappy with it that they, uh, persuaded federal prosecutors to sue under Sherman.

And it went before Judge Billings. And Judge Billings said, absolutely, this is a violation of the antitrust laws. And the curious thing about Judge Billings' decision, which is one of the first Sherman decisions in a federal court, is he didn't cite, for the proposition that the strike was a restraint on trade, to restraint-of-trade law. He cited to much older decisions about criminal conspiracies and unions to justify his decision.

And so what I'm trying to say is over and over and over again, whenever, you know, you look at the actual history of antitrust laws, you know, it isn't about efficiency, it's about fairness. It is about how small competitors and working people, farmers, laborers, deserve a level playing field. And in 1890, 1914, 1936, 1950, this was what was front and center for Congress.

CINDY COHN
It's great to end with a deep dive into the original intent of Congress to protect ordinary people and fairness with antitrust laws, especially in this time when history and original intent are so powerful for so many judges. You know, it’s solid grounding for going forward. But I also appreciate how you mapped the history to see how that Congressional intent was perverted by the judicial branch almost from the very start.

This shows us where we need to go to set things right but also that it’s a difficult road. Thanks so much Alvaro.

JASON KELLEY
Well, it's a rare privilege to get to complain about a former employer directly to a sitting FTC commissioner. So that was a very enjoyable conversation for me. It's also rare to learn something new about Dr. Seuss and a Dr. Seuss story, which we got to do. But as far as actual concrete takeaways go from that conversation, Cindy, what did you pull away from that really wide ranging discussion?

CINDY COHN
It’s always fun to talk to Alvaro. I loved his vision of a life lived with dignity and pride as the goal of our fixed internet. I mean those are good solid north stars, and from them we can begin to see how that means that we use technology in a way that, for example, allows workers to just focus on their work. And honestly, while that gives us dignity, it also stops the kind of mistakes we’re seeing like tracking keystrokes, or eye contact as secondary trackers that are feeding all kinds of discrimination.

So I really appreciate him really articulating, you know, what are the kinds of lives we wanna have. I also appreciate his thinking about the privacy gaps that get revealed as technology changes and, and the, the story of healthcare and how HIPAA doesn't protect us in the way that we'd hoped it would protect us, in part because I think HIPAA didn't start off at a very good place, but as things have shifted and say, you know, One Medical is being bought by Amazon, suddenly we see that the presumption of who your insurance provider was and what they might use that information for, has shifted a lot, and that the privacy law hasn't, hasn't kept up.

So I appreciate thinking about it from, you know, both of those perspectives, both, you know, what the law gets wrong and how technology can reveal gaps in the law.

JASON KELLEY
Yeah. That really stood out for me as well, especially the parts where Alvaro was talking about looking into the law in a way that he hadn't had to before. Like you say, because that is kind of what we do at EFF, at least part of what we do. And it's nice to hear that we are sort of on the same page and that there are people in government doing that. There are people at EFF doing that. There are people all over, in different areas doing that. And that's what we have to do because technology does change so quickly and so much.

CINDY COHN
Yeah, and I really appreciate the deep dive he's done into antitrust law and, and revealing really the, the, the fairness is a deep, deep part of it. And this idea that it's only about efficiency and especially efficiency for consumers only. It's ahistorical. And that's a good thing for us all to remember since we, especially these days have a Supreme Court that is really, you know, likes history a lot and grounds and limits what it does in history. The history's on our side in terms of, you know, bringing competition law, frankly, to the digital age.

JASON KELLEY
Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member or donate, or look at hoodies, t-shirts, hats or other merch.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

MUSIC CREDITS

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:

Lost track by airtone
Common ground by airtone
Probably shouldn’t by J Lang

A Win for Encryption: France Rejects Backdoor Mandate

By: Joe Mullin
March 21, 2025 at 15:33

In a moment of clarity after initially moving forward a deeply flawed piece of legislation, the French National Assembly has done the right thing: it rejected a dangerous proposal that would have gutted end-to-end encryption in the name of fighting drug trafficking. Despite heavy pressure from the Interior Ministry, lawmakers voted Thursday night (article in French) to strike down a provision that would have forced messaging platforms like Signal and WhatsApp to allow hidden access to private conversations.

The vote is a victory for digital rights, for privacy and security, and for common sense.

The proposed law was a surveillance wishlist disguised as anti-drug legislation. Tucked into its text was a resurrection of the widely discredited “ghost” participant model—a backdoor that pretends not to be one. Under this scheme, law enforcement could silently join encrypted chats, undermining the very idea of private communication. Security experts have condemned the approach, warning it would introduce systemic vulnerabilities, damage trust in secure communication platforms, and create tools ripe for abuse.

The French lawmakers who voted this provision down deserve credit. They listened—not only to French digital rights organizations and technologists, but also to basic principles of cybersecurity and civil liberties. They understood that encryption protects everyone, not just activists and dissidents, but also journalists, medical professionals, abuse survivors, and ordinary citizens trying to live private lives in an increasingly surveilled world.

A Global Signal

France’s rejection of the backdoor provision should send a message to legislatures around the world: you don’t have to sacrifice fundamental rights in the name of public safety. Encryption is not the enemy of justice; it’s a tool that supports our fundamental human rights, including the right to have a private conversation. It is a pillar of modern democracy and cybersecurity.

As governments in the U.S., U.K., Australia, and elsewhere continue to flirt with anti-encryption laws, this decision should serve as a model—and a warning. Undermining encryption doesn’t make society safer. It makes everyone more vulnerable.

This victory was not inevitable. It came after sustained public pressure, expert input, and tireless advocacy from civil society. It shows that pushing back works. But for the foreseeable future, misguided lobbyists for police and national security agencies will continue to push similar proposals—perhaps repackaged, or rushed through quieter legislative moments.

Supporters of privacy should celebrate this win today. Tomorrow, we will continue to keep watch.

State AGs Must Act: EFF Expands Call to Investigate Crisis Pregnancy Centers

Back in January, EFF called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable crisis pregnancy centers (CPCs) that engage in deceptive practices. Since then, some of these centers have begun to change their websites, quietly removing misleading language and privacy claims; the Hawaii legislature is considering a bill calling on the attorney general to investigate CPCs in the state; and legislators in Georgia have introduced a slate of bills to tackle deceptive CPC practices.

But there is much more to do. Today, we’re expanding our call to attorneys general in Tennessee, Oklahoma, Nebraska, and North Carolina, urging them to investigate the centers in their states.

Many CPCs have been operating under a veil of misleading promises for years—suggesting that clients’ personal health data is protected under HIPAA, even though numerous reports suggest otherwise: that privacy policies are not followed consistently, and that clients' personal data may be shared across networks without appropriate consent. For example, in a case in Louisiana, we saw firsthand how a CPC inadvertently exposed personal data from multiple clients in a software training video. This kind of error not only violates individuals’ privacy but could also lead to emotional and psychological harm for individuals who trusted these centers with their sensitive information.

We list multiple examples from CPCs in each of the states that claim to comply with HIPAA in our letters to Attorneys General Hilgers, Jackson, Drummond, and Skrmetti. Those include:

  • Gateway Women’s Care in North Carolina claims that “we hold your right to confidentiality with the utmost care and respect and comply with HIPAA privacy standards, which protect your personal and health information” in a blog post titled “Is My Visit Confidential?” Gateway Women’s Care received $56,514 in government grants in 2023. 
  • Assure Women’s Center in Nebraska stresses that it is “HIPAA compliant!” in a blog post that expressly urges people to visit them “before your doctor.”

As we’ve noted before, there are far too few protections for user privacy—including medical privacy—and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices.

It’s time for state and federal leaders to reassess how public funds are allocated to these centers. Our elected officials are responsible for ensuring that personal information, especially our sensitive medical data, is protected. After all, no one should have to choose between their healthcare and their privacy.

EFFecting Change: Is There Hope for Social Media?

March 13, 2025 at 16:21

Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech. 

EFFecting Change Livestream Series:
Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!

RSVP Today

Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?

Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Hawaii Takes a Stand for Privacy: HCR 144/HR 138 Calls for Investigation of Crisis Pregnancy Centers

In a bold push for medical privacy, Hawaii's House of Representatives has introduced HCR 144/HR 138, a resolution calling for the Hawaii Attorney General to investigate whether crisis pregnancy centers (CPCs) are violating patient privacy laws. 

Often referred to as “fake clinics” or “unregulated pregnancy centers” (UPCs), these are non-medical centers that provide free pregnancy tests and counseling, but typically do not offer essential reproductive care like abortion or contraception. In Hawaii, these centers outnumber actual clinics offering abortion and reproductive healthcare. In fact, the first CPC in the United States was opened in Hawaii in 1967 by Robert Pearson, who then founded the Pearson Foundation, a St. Louis-based organization to assist local groups in setting up unregulated crisis pregnancy centers.

EFF has called on state AGs to investigate CPCs across the country. In particular, we are concerned that many centers have misrepresented their privacy practices, including suggesting that patient information is protected by HIPAA when it may not be. In January, EFF contacted attorneys general in Florida, Texas, Arkansas, and Missouri asking them to identify and hold accountable CPCs that engage in deceptive practices.

Rep. Kapela’s resolution specifically references EFF’s call on state Attorneys General. It reads:

“WHEREAS, the Electronic Frontiers Foundation, an international digital rights nonprofit that promotes internet civil liberties, has called on states to investigate whether crisis pregnancy centers are complying with patient privacy regulations with regard to the retention and use of collected patient data.” 

HCR 144/HR 138 underscores the need to ensure that healthcare providers handle personal data, particularly medical data, securely and transparently. Along with EFF’s letters to state AGs, the resolution refers to the increasing body of research on the topic, such as:

  • A 2024 Healthcare Management Associates Study showed that CPCs received $400 million in federal funding between 2017 and 2023, with little oversight from regulators.
  • A Health Affairs article from November 2024 titled "Addressing the HIPAA Blind Spot for Crisis Pregnancy Centers" noted that crisis pregnancy centers often invoke the Health Insurance Portability and Accountability Act (HIPAA) to collect personal information from clients.

Regardless of one's stance on reproductive healthcare, there is one principle that should be universally accepted: the right to privacy. As HCR 144/HR 138 moves forward, it is imperative that Hawaii's Attorney General investigate whether CPCs are complying with privacy regulations and take action, if necessary, to protect the privacy rights of individuals seeking reproductive healthcare in Hawaii. 

Without comprehensive privacy laws that offer individuals a private right of action, state authorities must be the front line in safeguarding the privacy of their constituents. As we continue to advocate for stronger privacy protections nationwide, we encourage lawmakers and advocates in other states to follow Hawaii's lead and take action to protect the medical privacy rights of all of their constituents.

The Senate Passed The TAKE IT DOWN Act, Threatening Free Expression and Due Process

25 February 2025 at 16:10

Earlier this month, the Senate passed the TAKE IT DOWN Act (S. 146), by a voice vote. The bill is meant to speed up the removal of non-consensual intimate imagery, or NCII, including videos that imitate real people, a technology sometimes called “deepfakes.” 

Protecting victims of these heinous privacy invasions is a legitimate goal. But good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve. 

This misguided bill can still be stopped in the House of Representatives. Help us speak out against it now. 

TAKE ACTION

"Take It Down" Has No real Safeguards  

Before this vote, EFF, along with the Center for Democracy & Technology (CDT), Authors Guild, Demand Progress Action, Fight for the Future, Freedom of the Press Foundation, New America’s Open Technology Institute, Public Knowledge, Restore The Fourth, SIECUS: Sex Ed for Social Change, TechFreedom, and Woodhull Freedom Foundation, sent a letter to the Senate, asking them to change this legislation to protect legitimate speech that is not NCII. Changes are also needed to protect users who rely on encrypted services.

The letter explains that the bill’s “takedown” provision applies to a much broader category of content—potentially any images involving intimate or sexual content at all—than the narrower NCII definitions found elsewhere in the bill. The bill contains no protections against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims.

This would likely lead to the use of often-inaccurate automated filters that are infamous for flagging legal content, from fair-use commentary to news reporting. Communications providers that offer users end-to-end encrypted messaging, meanwhile, may be served with notices they simply cannot comply with, given the fact that these providers cannot view the contents of messages on their platforms. Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces. 

Congress should focus on enforcing and improving the many existing civil and criminal laws that address NCII, rather than opting for a broad takedown regime that is bound to be abused. Tell your Member of Congress to oppose censorship and to oppose S. 146. 

TAKE ACTION

Tell the House to stop "Take It Down"




New Yorkers Deserve Stronger Health Data Protections Now—Governor Hochul Can Make It Happen

25 February 2025 at 11:34

With the rise of digital surveillance, securing our health data is no longer just a privacy issue—it's a matter of personal safety. In the wake of the Supreme Court's reversal of Roe v. Wade and the growing restrictions on abortion and gender-affirming care, protecting our personal health data has never been more important. And in a world where nearly half of U.S. states have either banned or are on the brink of banning abortion, unfettered access to personal health data is an even more dangerous threat.

That’s why EFF joins the New York Civil Liberties Union (NYCLU) in urging Governor Hochul to sign the New York Health Information Privacy Act (A.2141/S.929). This legislation is a crucial step toward safeguarding the digital privacy of New Yorkers at a time when health data is increasingly vulnerable to misuse.

Why Health Data Privacy Matters

When individuals seek reproductive health care or gender-affirming care, they leave behind a digital trail. Whether through search histories, email exchanges, travel itineraries, or data from period-tracker apps and smartwatches, every click, every action, and every step is tracked, often with little or no consent. And this kind of data—however collected—has already been used to criminalize individuals who were simply seeking health care.

Unlike HIPAA, which regulates “covered entities” (providers of treatment and payors/insurers that are part of the traditional health care system) and their “business associates,” this bill would expand its reach to cover a broad range of “new” entities. These include data brokers, tech companies, and others in the digital ecosystem who can access and share this sensitive health information. The result is a growing web of entities collecting personal data, far beyond the scope of traditional health care providers.

For example, in some states, individuals have been investigated or even prosecuted based on their digital data, simply for obtaining abortion care. In a world where our health choices are increasingly monitored, the need for robust privacy protections is clearer than ever. The New York Health Information Privacy Act is the Empire State’s opportunity to lead the nation in protecting its residents.

What Does the Health Information Privacy Act Do?

At its core, the New York Health Information Privacy Act would provide vital protections for New Yorkers' electronic health data. Here’s what the bill does:

  • Prohibits the sale of health data: Health data is not a commodity to be bought and sold. This bill ensures that your most personal information is not used for profit by commercial entities without your consent.
  • Requires explicit consent: Before health data is processed, New Yorkers will need to provide clear, informed consent. The bill limits processing (storing, collecting, using) of personal data to “strictly necessary” purposes only, minimizing unnecessary collection.
  • Data deletion rights: Health data will be deleted by default after 60 days, unless the individual requests otherwise. This empowers individuals to control their data, ensuring that unnecessary information doesn’t linger.
  • Non-discrimination protections: Individuals will not face discrimination or higher costs for exercising their privacy rights. No one should be penalized for wanting to protect their personal information.

Why New York Needs This Bill Now

The need for these protections is urgent. As digital surveillance expands, so does the risk of personal health data being used against individuals. In a time when personal health decisions are under attack, it’s crucial that New Yorkers have control over their health information. By signing this bill, Governor Hochul would ensure that out-of-state actors cannot easily access New Yorkers’ health data without due process, protecting individuals from legal actions in states that criminalize reproductive and gender-affirming care.

However, this bill still faces a critical shortcoming—the absence of a private right of action (PRA). Without it, individuals cannot directly sue companies for privacy violations, leaving them vulnerable. Accountability would fall solely on the Attorney General, who would need the resources to quickly and consistently enforce the new law. Nonetheless, the Attorney General’s role will now be critical in ensuring this bill is upheld, and they must remain steadfast in implementing these protections effectively.

Governor Hochul: Sign A.2141/S.929

The importance of this legislation cannot be overstated—it is about protecting people from potential legal actions related to their health care decisions. By signing this bill, Governor Hochul would solidify New York’s position as a leader in health data privacy and take a firm stand against the misuse of personal information.

New York has the power to protect its residents and set a strong precedent for privacy protections across the nation. Let’s ensure that personal health data remains in the hands of those who own it—the individuals themselves.

Governor Hochul: This is your chance to make a difference. Let’s take action now to protect what matters most—our health, our data, and our rights. Sign A.2141/S.929 today.

The Judicial Conference Should Continue to Liberally Allow Amicus Briefs, a Critical Advocacy Tool

By: Sophia Cope
21 February 2025 at 19:56

EFF does a lot of things, including impact litigation, legislative lobbying, and technology development, all to fight for your civil liberties in the digital age. With litigation, we directly represent clients and also file “amicus” briefs in court cases.

An amicus brief, also called a “friend-of-the-court” brief, is one we file when we don’t represent a party on either side of the “v”; instead, we provide the court with a helpful outside perspective on the case, either on behalf of ourselves or other groups, to help the court make its decision.

Amicus briefs are a core part of EFF’s legal work. Over the years, courts at all levels have extensively engaged with and cited our amicus briefs, showing that they value our thoughtful legal analysis, technical expertise, and public interest mission.

Unfortunately, the Judicial Conference—the body that oversees the federal court system—has proposed changes to the rule governing amicus briefs (Federal Rule of Appellate Procedure 29) that would make it harder to file such briefs in the circuit courts.

EFF filed comments with the Judicial Conference sharing our thoughts on the proposed rule changes (a total of 407 comments were filed). Two proposed changes are particularly concerning.

First, amicus briefs would be “disfavored” if they address issues “already mentioned” by the parties. This language is extremely broad and may significantly reduce the amount and types of amicus briefs that are filed in the circuit courts. As we said in our comments:

We often file amicus briefs that expand upon issues only briefly addressed by the parties, either because of lack of space given other issues that party counsel must also address on appeal, or a lack of deep expertise by party counsel on a specific issue that EFF specializes in. We see this often in criminal appeals when we file in support of the defendant. We also file briefs that address issues mentioned by the parties but additionally explain how the relevant technology works or how the outcome of the case will impact certain other constituencies.

We then shared examples of EFF amicus briefs that may have been disfavored if the “already mentioned” standard had been in effect, even though our briefs provided help to the courts. Just two examples are:

  • In United States v. Cano, we filed an amicus brief that addressed the core issue of the case—whether the border search exception to the Fourth Amendment’s warrant requirement applies to cell phones. We provided a detailed explanation of the privacy interests in digital devices, and a thorough Fourth Amendment analysis regarding why a warrant should be required to search digital devices at the border. The Ninth Circuit extensively engaged with our brief to vacate the defendant’s conviction.
  • In NetChoice, LLC v. Attorney General of Florida, a First Amendment case about social media content moderation (later considered by the Supreme Court), we filed an amicus brief that elaborated on points only briefly made by the parties about the prevalence of specialized social media services reflecting a wide variety of subject matter focuses and political viewpoints. Several of the examples we provided were used by the 11th Circuit in its opinion.

Second, the proposed rules would require an amicus organization (or person) to file a motion with the court and get formal approval before filing an amicus brief. This would replace the current rule, which also allows an amicus brief to be filed if both parties in the case consent (which is commonly what happens).

As we stated in our comments: “Eliminating the consent provision will dramatically increase motion practice for circuit courts, putting administrative burdens on the courts as well as amicus brief filers.” We also argued that this proposed change “is not in the interests of justice.” We wrote:

Having to write and file a separate motion may disincentivize certain parties from filing amicus briefs, especially people or organizations with limited resources … The circuits should … facilitate the participation by diverse organizations at all stages of the appellate process—where appeals often do not just deal with discrete disputes between parties, but instead deal with matters of constitutional and statutory interpretation that will impact the rights of Americans for years to come.

Amicus briefs are a crucial part of EFF’s work in defending your digital rights, and our briefs provide valuable arguments and expertise that help the courts make informed decisions. That’s why we are calling on the Judicial Conference to reject these changes and preserve our ability to file amicus briefs in the federal appellate courts that make a difference.

Your support is essential in ensuring that we can continue to fight for your digital rights—in and out of court.

DONATE TO EFF

Cornered by the UK’s Demand for an Encryption Backdoor, Apple Turns Off Its Strongest Security Setting

Today, in response to the U.K.’s demands for a backdoor, Apple has stopped offering users in the U.K. Advanced Data Protection, an optional feature in iCloud that turns on end-to-end encryption for files, backups, and more.

Had Apple complied with the U.K.’s original demands, they would have been required to create a backdoor not just for users in the U.K., but for people around the world, regardless of where they were or what citizenship they had. As we’ve said time and time again, any backdoor built for the government puts everyone at greater risk of hacking, identity theft, and fraud.

This blanket, worldwide demand put Apple in an untenable position. Apple has long claimed it wouldn’t create a backdoor, and in filings to the U.K. government in 2023, the company specifically raised the possibility of disabling features like Advanced Data Protection as an alternative. Apple's decision to disable the feature for U.K. users could well be the only reasonable response at this point, but it leaves those people at the mercy of bad actors and deprives them of a key privacy-preserving technology. The U.K. has chosen to make its own citizens less safe and less free.

Although the U.K. Investigatory Powers Act purportedly authorizes orders to compromise security like the one issued to Apple, policymakers in the United States are not entirely powerless. As Senator Ron Wyden and Representative Andy Biggs noted in a letter to the Director of National Intelligence (DNI) last week, the US and U.K. are close allies who have numerous cybersecurity- and intelligence-sharing agreements, but “the U.S. government must not permit what is effectively a foreign cyberattack waged through political means.” They pose a number of key questions, including whether the CLOUD Act—an “encryption-neutral” law that enables special status for the U.K. to request data directly from US companies—actually allows the sort of demands at issue here. We urge Congress and others in the US to pressure the U.K. to back down and to provide support for US companies to resist backdoor demands, regardless of what government issues them.

Meanwhile, Apple is not the only company operating in the U.K. that offers end-to-end encrypted backup features. For example, you can optionally enable end-to-end encryption for chat backups in WhatsApp or for backups from Samsung Galaxy phones. Many cloud backup services offer similar protections, and countless chat apps, like Signal, use end-to-end encryption to secure conversations. We do not know if other companies have been approached with similar requests, but we hope they stand their ground as well.

If you’re in the U.K. and have not enabled ADP, you can no longer do so. If you have already enabled it, Apple will provide guidance soon about what to do. This change will not affect the end-to-end encryption used in Apple Messages, nor does it alter other data that’s end-to-end encrypted by default, like passwords and health data. But iCloud backups have long been a loophole for law enforcement to gain access to data otherwise not available to them on iPhones with device encryption enabled, including the contents of messages they’ve stored in the backup. Advanced Data Protection is an optional feature to close that loophole. Without it, U.K. users’ files and device backups will be accessible to Apple, and thus shareable with law enforcement.

We appreciate Apple’s stance against the U.K. government’s request. Weakening encryption violates fundamental rights. We all have the right to private spaces, and any backdoor would annihilate that right. The U.K. must back down from these overreaching demands and allow Apple—and others—to provide the option for end-to-end encrypted cloud storage.

EFF at RightsCon 2025

21 February 2025 at 12:31

EFF is delighted to be attending RightsCon again—this year hosted in Taipei, Taiwan, from 24 to 27 February.

RightsCon provides an opportunity for human rights experts, technologists, activists, and government representatives to discuss pressing human rights challenges and their potential solutions. 

Many EFFers are heading to Taipei and will be actively participating in this year's event. Several members will lead sessions, speak on panels, and be available for networking.

Our delegation includes:

  • Alexis Hancock, Director of Engineering, Certbot
  • Babette Ngene, Public Interest Technology Director
  • Christoph Schmon, International Policy Director
  • Cindy Cohn, Executive Director
  • Daly Barnett, Senior Staff Technologist
  • David Greene, Senior Staff Attorney and Civil Liberties Director
  • Jillian York, Director of International Freedom of Expression
  • Karen Gullo, Senior Writer for Free Speech and Privacy
  • Paige Collings, Senior Speech and Privacy Activist
  • Svea Windwehr, Assistant Director of EU Policy
  • Veridiana Alimonti, Associate Director For Latin American Policy

We hope you’ll have the opportunity to connect with us during the conference, especially at the following sessions: 

Day 0 (Monday 24 February)

Mutual Support: Amplifying the Voices of Digital Rights Defenders in Taiwan and East Asia

09:00 - 12:30, Room 101C
Alexis Hancock, Director of Engineering, Certbot
Host institutions: Open Culture Foundation, Odditysay Labs, Citizen Congress Watch and FLAME

This event aims to present Taiwan and East Asia’s digital rights landscape, highlighting current challenges faced by digital rights defenders and fostering resonance with participants' experiences. Join to engage in insightful discussions, learn from Taiwan’s tech community and civil society, and contribute to the global dialogue on these pressing issues. The form to register is here.

Platform accountability in crisis? Global perspective on platform accountability frameworks

09:00 - 13:00, Room 202A
Christoph Schmon, International Policy Director; Babette Ngene, Public Interest Technology Director
Host institutions: Electronic Frontier Foundation (EFF), Access Now

This high-level panel will reflect on alarming developments in platforms' content policies and their enforcement, and discuss whether existing frameworks offer meaningful tools to counter the current platform accountability crisis. The starting point for the discussion will be Access Now's recently launched report Platform accountability: a rule-of-law checklist for policymakers. The panel will be followed by a workshop dedicated to the “Draft Viennese Principles for Embedding Global Considerations into Human-Rights-Centred DSA enforcement”. Facilitated by the DSA Human Rights Alliance, the workshop will provide a safe space for civil society organisations to strategize and discuss necessary elements of a human-rights-based approach to platform governance.

Day 1 (Tuesday 25 February) 

Criminalization of Tor in Ola Bini’s case? Lessons for digital experts in the Global South

09:00 - 10:00 (online)
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Access Now, Centro de Autonomía Digital (CAD), Observation Mission of the Ola Bini Case, Tor Project

This session will analyze how the use of Tor is criminalized in Ola Bini's case and its implications for digital experts in other contexts of criminalization in the Global South, especially when they defend human rights online. Participants will work through various exercises to: 1- Analyze, from a technical perspective, the judicial criminalization of Tor in Ola Bini's case, and 2- Collectively analyze how its criminalization can affect (judicially) the work of digital experts from the Global South and discuss possible support alternatives.

The counter-surveillance supply chain

11:30 - 12:30, Room 201F
Babette Ngene, Public Interest Technology Director
Host institution: Meta

The fight against surveillance and other malicious cyber adversaries is a whole-of-society problem, requiring international norms and policies, in-depth research, platform-level defenses, investigation, and detection. This dialogue focuses on the critical first link in this counter-surveillance supply chain: the on-the-ground organizations around the world who are the first contact for local activists and organizations dealing with targeted malware. It will include an open discussion on how to improve the global response to surveillance and surveillance-for-hire actors through a lens of local contextual knowledge and information sharing.

Day 2 (Wednesday 26 February) 

Derecho a no ser objeto de decisiones automatizadas: desafíos y regulaciones en el sector judicial

16:30 - 17:30, Room 101C
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Hiperderecho, Red en Defensa de los Derechos Digitales, Instituto Panamericano de Derecho y Tecnología

This panel will analyze specific cases from Mexico, Peru, and Colombia to understand the ethical and legal implications of using artificial intelligence in the drafting and reasoning of judicial rulings. The dialogue seeks to address the right not to be subject to automated decisions, along with the ethical and legal implications of automating judicial rulings. Some tools can reproduce or amplify discriminatory stereotypes, in addition to possible violations of privacy and personal data protection rights, among other concerns.

Prying Open the Age-Gate: Crafting a Human Rights Statement Against Age Verification Mandates

16:30 - 17:30, Room 401 
David Greene, Senior Staff Attorney and Civil Liberties Director
Host institutions: Electronic Frontier Foundation (EFF), Open Net, Software Freedom Law Centre, EDRi

The session will engage participants in considering the issues and seeding the drafting of a global human rights statement on online age verification mandates. After a background presentation on various global legal models to challenge such mandates (with the facilitators representing Asia, Africa, Europe, US), participants will be encouraged to submit written inputs (that will be read during the session) and contribute to a discussion. This will be the start of an ongoing effort that will extend beyond RightsCon with the goal of producing a human rights statement that will be shared and endorsed broadly. 

Day 3 (Thursday 27 February) 

Let's talk about the elephant in the room: transnational policing and human rights

10:15 - 11:15, Room 201B
Veridiana Alimonti, Associate Director For Latin American Policy
Host institutions: Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto

This dialogue focuses on growing trends surrounding transnational policing, which pose new and evolving challenges to international human rights. The session will distill emergent themes, with focal points including expanding informal and formal transnational cooperation and data-sharing frameworks at regional and international levels, the evolving role of borders in the development of investigative methods, and the proliferation of new surveillance technologies including mercenary spyware and AI-driven systems. 

Queer over fear: cross-regional strategies and community resistance for LGBTQ+ activists fighting against digital authoritarianism

11:30 - 12:30, Room 101D
Paige Collings, Senior Speech and Privacy Activist
Host institutions: Access Now, Electronic Frontier Foundation (EFF), De|Center, Fight for the Future

The rise of the international anti-gender movement has seen authorities pass anti-LGBTQ+ legislation that has made the stakes of survival even higher for sexual and gender minorities. This workshop will bring together LGBTQ+ activists from Africa, the Middle East, Eastern Europe, Central Asia and the United States to exchange ideas for advocacy and liberation from the policies, practices and directives deployed by states to restrict LGBTQ+ rights, as well as how these actions impact LGBTQ+ people—online and offline—particularly with regard to online organizing, protest, and movement building.

EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements

By: Hannah Zhao
24 January 2025 at 19:03

Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant for Mr. Carson's smart phone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief.

In our brief, we argued that the warrant was constitutionally deficient and overbroad: there was no probable cause to search the cell phone, and the warrant was insufficiently particular because it failed to limit the search to a time frame or to certain categories of information.

As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health. As the refrain goes, whatever the need, there's an app for that. This special nature of digital devices requires courts to review warrants to search digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.

In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.

Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. This type of "all content" warrant is exactly the kind of general warrant against which the Fourth Amendment and its state corollaries were meant to protect. Cell phone search warrants that have been upheld have contained temporal constraints and limits on the categories of data to be searched. Neither limitation, nor any other, appeared in the issued search warrant. The police should have used date limitations in applying for the search warrant, as they did in their warrant applications for other searches in the same investigation. Additionally, the warrant allowed the search of all the information on the phone, the vast majority of which did not—and could not—contain evidence related to the investigation.

As smart phones become more capacious and take on more functions, it is imperative that courts adhere to a narrow construction of warrants for the search of electronic devices, to support the basic purpose of the Fourth Amendment: safeguarding the privacy and security of individuals against arbitrary invasions by government officials.

Face Scans to Estimate Our Age: Harmful and Creepy AF

23 January 2025 at 18:56

Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.

Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
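To make the design difference concrete, here is a minimal, hypothetical sketch contrasting the two patterns described above. The estimateAgeLocally helper and the upload endpoint are placeholders invented for illustration, not any vendor's real API, and even the lower-risk pattern only reduces exposure rather than eliminating it.

```typescript
// Hypothetical contrast between two age-gate designs. The endpoint and the
// estimateAgeLocally() helper are placeholders, not any vendor's real API.

// Riskier pattern: the raw face image leaves the device, where it may be
// stored, logged, or correlated with the content the visitor goes on to view.
async function checkAgeInCloud(faceImage: Blob): Promise<boolean> {
  const res = await fetch("https://age-check.example/estimate", {
    method: "POST",
    body: faceImage,
  });
  const { estimatedAge } = await res.json();
  return estimatedAge >= 18;
}

// Lower-risk pattern: estimation runs locally, only a yes/no answer is kept,
// and the image is never transmitted or retained.
async function checkAgeOnDevice(faceImage: Blob): Promise<boolean> {
  const estimatedAge = await estimateAgeLocally(faceImage);
  return estimatedAge >= 18;
}

// Stand-in for an on-device model; a real implementation would run a bundled
// neural network against the image and return its estimate.
async function estimateAgeLocally(_faceImage: Blob): Promise<number> {
  return 21; // illustrative value only
}
```

The difference is simply where the face image travels and how long anything derived from it is retained; neither design removes the risks described in the rest of this section.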

Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods, whether from users with murky consent or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significant here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away and forgo the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

The Impact of Age Verification Measures Goes Beyond Porn Sites

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court–Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unsure of, and unable to precisely exclude, only the content that fits the bill's definition, leading them to over-censor material that may well include this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education, and LGBTQ+ identity and health, being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA. 

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from "sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users that turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default. 

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This patchwork of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression.

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Texas Is Enforcing Its State Data Privacy Law. So Should Other States.

22 January 2025 at 17:31

States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA) to take the Allstate Corporation to task for sharing driver location and other driving data without telling customers.

In its complaint, the attorney general’s office alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting unfair and deceptive practices.)

On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate certain functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.
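For readers unfamiliar with how an embedded SDK can quietly collect data, here is a minimal, hypothetical sketch of the pattern described in the complaint. The TelematicsSDK class, its method names, and the collector endpoint are invented for illustration; this is not Arity's actual code or API.

```typescript
// Hypothetical sketch of a location-harvesting SDK embedded in a host app.
// The class, method names, and endpoint are invented; this is not any
// vendor's real code or API.

class TelematicsSDK {
  constructor(private apiKey: string) {}

  // The host app already asked the user for location permission for its own
  // features, so this call typically succeeds without any new prompt.
  start(): void {
    navigator.geolocation.watchPosition((pos) => {
      this.upload({
        lat: pos.coords.latitude,
        lon: pos.coords.longitude,
        speed: pos.coords.speed ?? 0, // meters per second, when available
        timestamp: pos.timestamp,
      });
    });
  }

  private upload(sample: {
    lat: number;
    lon: number;
    speed: number;
    timestamp: number;
  }): void {
    // Each location sample is forwarded to the data broker in the background.
    fetch("https://collector.example/v1/driving-data", {
      method: "POST",
      headers: { "Content-Type": "application/json", "X-Api-Key": this.apiKey },
      body: JSON.stringify(sample),
    });
  }
}

// From the user's perspective, they installed and consented to the host app,
// not to this embedded collector.
const sdk = new TelematicsSDK("partner-app-key");
sdk.start();
```

Because the host app has already obtained location permission for its own features, nothing in this flow prompts the user again or mentions that a second company is receiving their movements; that silent onward sharing is exactly what notice-and-consent requirements are meant to surface.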

That’s a dirty trick. Data that you can pull from cars is often highly sensitive, as we have raised repeatedly. Everyone should know when that information's being collected and where it's going.

More state regulators should follow suit and use the privacy laws on their books.

The Texas Attorney General’s office estimates that 45 million Americans, including those in Texas, unwittingly downloaded this software, which collected their information, including location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, to give conspicuous notice when they sell or process sensitive data for targeted advertising, and to obtain consumer consent to process sensitive data.

This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do, for example, to fulfill one of the notice obligations under the TDPSA was to put up a line on their website saying, “NOTICE: We may sell your sensitive personal data.”

In fact, Texas’s privacy law does not meet the minimum of what we’d consider a strong privacy law. For example, the Texas Attorney General is the only one who can file a lawsuit under the state’s privacy law. But we advocate for provisions that make sure that everyone, not only state attorneys general, can file suit to make sure that all companies respect our privacy.

Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to cure, because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured the violation. It just didn’t.

According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.

Nineteen states now have some version of a data privacy law. Enforcement has been a bit slower. California has brought a few enforcement actions since its privacy law went into effect in 2020; Texas and New Hampshire are two states that have created dedicated data privacy units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.

The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step

22 January 2025 at 16:30

The Federal Trade Commission announced a proposed settlement agreeing that General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites.

The FTC complaint zeroed in on how G.M. enrolled people in its OnStar connected vehicle service with a misleading process. OnStar was initially designed to help drivers in an emergency, but over time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that deserves to be stopped.

For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, making it so they cannot share driver data or geolocation to consumer reporting agencies, which gather and sell consumers’ credit and other information. They must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to disable the collection of location data and driving information.

These are all important, solid steps, and these sorts of rules should apply to all carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects, and it is not always possible to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.

All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be through comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all carmakers—not just G.M.—would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for.

Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data

Par : Lena Cohen
17 janvier 2025 à 10:59

If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced its latest policy changes. These changes, seemingly designed to appease the incoming Trump administration, included loosening Meta’s hate speech policy to allow for the targeting of LGBTQ+ people and immigrants. 

If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.

Meta’s Business Model Relies on Your Personal Data

You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue, demonstrating just how valuable your personal data is to its business. 

How Meta Harvests Your Personal Data

Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data. A 2022 investigation by The Markup found that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel. 
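
To make the mechanics concrete, here is a minimal, hypothetical sketch (in TypeScript) of what a third-party tracking pixel does when a publisher embeds it on a page. The endpoint and parameter names below are stand-ins, not Meta’s actual code, but the pattern is the same: the page quietly fires a request to the tracker’s own domain carrying the address of the page you’re viewing, and any identifying cookies that domain has already set in your browser ride along with it.

    // Hypothetical sketch of a tracking pixel (illustrative only; not Meta's code).
    // "tracker.example.com" and the parameter names are invented for this example.
    function reportPageView(pixelId: string): void {
      const params = new URLSearchParams({
        id: pixelId,               // identifies the site owner's ad account
        dl: window.location.href,  // the page you are visiting right now
        rl: document.referrer,     // the page you arrived from
      });
      // Requesting an invisible image is enough to send the data; nothing appears on the page.
      new Image().src = `https://tracker.example.com/tr?${params.toString()}`;
      // Because the request goes to the tracker's own domain, cookies set by that
      // domain are attached automatically, letting the tracker tie this page view
      // to a profile of you.
    }

    reportPageView("1234567890"); // the embedding site's pixel ID (made up)

Blocking requests to the tracker’s domain, as described in the Privacy Badger section below, is enough to stop this kind of reporting on the sites you visit.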

Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send it data about your offline purchases and interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data. In 2018, Meta admitted to collecting information about non-users, including their contact details and browsing history.

Take These Steps to Limit How Meta Profits From Your Personal Data

Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data. 

Update Your Meta Account Settings

Open your Instagram or Facebook app and navigate to the Accounts Center page. 

A screenshot of the Meta Accounts Center page.

If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for both platforms.

Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes its settings and adds additional steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.

Once you’re on the “Accounts Center” page, make the following changes:

1) Stop Meta from targeting ads based on data it collects about you on other apps and websites: 

Click the Ad preferences option under Accounts Center, then select the Manage info tab (this tab may be called Ad settings depending on your location). Click the Activity information from ad partners option, then Review Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when prompted.

A screenshot of the "Activity information from ad partners" setting with the "No" option selected

2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the process.

Back on the Ad preferences page, click the Manage info tab again (called Ad settings depending on your location), then select the Ads shown outside of Meta setting, select Not allowed and then click the “X” button to close the pop-up.

Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.

A screenshot of the "Ads outside Meta" setting with the "Not allowed" option selected

3) Disconnect the data that other companies share with Meta about you from your account:

From the Accounts Center screen, click the Your information and permissions option, followed by Your activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity button. Note: This may take up to 48 hours to take effect.

Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.

A screenshot of the "Manage future activity" setting with the "Disconnect future activity" option selected

While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for other purposes. 

Install Privacy Badger to Block Meta’s Trackers

Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.

Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the only mobile browser that supports Privacy Badger. 
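
If you’re curious how that blocking works under the hood, the sketch below shows the general technique in a simplified, hypothetical WebExtension background script. It is not Privacy Badger’s actual code (Privacy Badger also learns to recognize trackers heuristically and swaps social widgets for click-to-activate placeholders); it simply cancels requests to a couple of known tracking endpoints before they ever load.

    // Hypothetical sketch of tracker blocking in a WebExtension background script,
    // using the Manifest V2 blocking webRequest API (still supported in Firefox).
    // Not Privacy Badger's actual implementation; types come from @types/chrome.
    const TRACKER_URL_PATTERNS: string[] = [
      "*://connect.facebook.net/*", // where the pixel script is typically loaded from
      "*://*.facebook.com/tr*",     // where page-view reports are typically sent
    ];

    chrome.webRequest.onBeforeRequest.addListener(
      () => ({ cancel: true }),       // drop any matching request entirely
      { urls: TRACKER_URL_PATTERNS }, // only intercept requests to these patterns
      ["blocking"]                    // ask the browser to wait for our decision
    );

On Chrome’s newer Manifest V3, extensions have to express this kind of rule through the declarativeNetRequest API instead, which is one reason blocking extensions behave differently across browsers.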

Limit Meta’s Tracking on Your Phone

Take these additional steps on your mobile device:

  • Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
  • Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.

The Real Solution: Strong Privacy Legislation

Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information. 

Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit.

Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton

Par : Jason Kelley
13 janvier 2025 à 16:02

The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.  

The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.  

The plaintiff in this case is the Free Speech Coalition, the nonprofit non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.

1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.  

Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why: 

While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof of age in physical spaces, practical differences make those in-person disclosures less burdensome, or even unnecessary, compared with online mandates. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might be seventeen or under.  

First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.  

Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. The law doesn’t set out a specific method for websites to verify ages, but submitting a government-issued ID is the most common method of online age verification today. Yet fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.  

Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of a person’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway. 

Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.  

Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.  

2. HB1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare. 

Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over won’t be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, either of which could be a fly-by-night company with no published privacy standards, is handling that sensitive data responsibly. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.  

There is a real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, the age verification company AU10TIX suffered a breach, and there’s every reason to expect such incidents will multiply if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to end up in the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches involving far less sensitive information, HB 1181 creates a perfect storm for data privacy. 

3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.  

More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography. 

It’s also not just adult content that’s at risk. A ruling from the Court on HB 1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media. 

4. The Supreme Court Has Rightly Struck Down Similar Laws Before.  

In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case, the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  

The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web. 

Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

5. There is No Safe, Privacy Protecting Age-Verification Technology. 

The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t fit neatly on a spectrum of “more safe” and “less safe,” or “more accurate” and “less accurate.” Rather, they each fall somewhere on a spectrum from dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act. 

* * *

 

The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law 

Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into virtually every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights. 

For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review. 

 

EFF Goes to Court to Uncover Police Surveillance Tech in California

Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.

EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming the information is a trade secret. We are going to court to make sure the public gets access to these records.

The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.

How did this case start?

As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.

The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.

Pen-Link then filed a lawsuit to permanently block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement customers to sign non-disclosure agreements promising not to reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.

“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.

Unfortunately, these kinds of terms are not new.

EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool, traditionally used only to protect a person’s constitutional right to privacy—not a business’s purported trade secrets. In addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the unredacted records.

Who are Pen-Link and Cobwebs Technologies?

Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired the company Cobwebs Technologies.

The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount of location data sourced from large data sets. Tangles uses AI to glean intelligence from the “open, deep and dark web.”

Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals demonstrating that its products can be used to target activists and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS), as well as with state entities like the Texas Department of Public Safety and the West Virginia Fusion Center. EFF classifies this type of product as a “Third Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.

What’s next?

Before EFF officially joins the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.

For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.

Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass surveillance to local police.

The case number is STK-CV-UWM-0016425. Read more here: 

EFF's Motion to Intervene
EFF's Points and Authorities
Trujillo Declaration & EFF's Cross-Petition
Pen-Link's Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office
