
Saving the Internet in Europe: Fostering Choice, Competition and the Right to Innovate

This is the fourth instalment in a four-part blog series documenting EFF's work in Europe. You can read additional posts here: 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.   

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.  

EFF’s Approach to Competition  

Market concentration and monopoly power among internet companies and internet access providers affect many of EFF’s issues, particularly innovation, consumer privacy, net neutrality, and platform censorship. And we have said it many times: antitrust law and rules on market fairness are powerful tools with the potential either to further cement the hold of established giants over a market or to challenge incumbents and spur innovation and choice that benefit users. Antitrust enforcement must hit monopolists where it hurts: ensuring that anti-competitive behaviors like abuse of dominance by multi-billion-dollar tech giants come at a price high enough to force real change.

The EU has recently shown that it is serious about cracking down on Big Tech companies with its full arsenal of antitrust rules. For example, in a high-stakes appeal in 2022, EU judges upheld a record fine of more than €4.1 billion against Google for abusing its dominant position by locking Android users into its search engine (the case is now pending before the Court of Justice). 

We believe that with the right dials and knobs, clever competition rules can complement antitrust enforcement and ensure that firms that grow top heavy and sluggish are displaced by nimbler new competitors. Good competition rules should enable better alternatives that protect users’ privacy and enhance users’ technological self-determination. In the EU, this requires not only proper enforcement of existing rules but also new regulation that tackles gatekeepers’ dominance before harm is done. 

The Digital Markets Act  

The DMA will probably turn out to be one of the most impactful pieces of EU tech legislation in history. It’s complex, but the overall approach is to place new requirements and restrictions on online “gatekeepers”: the largest tech platforms, which control access to digital markets for other businesses. These requirements are designed to break down the barriers businesses face in competing with the tech giants. 

Let’s break down some of the DMA’s rules. If enforced robustly, the DMA will make it easier for users to switch services, install third-party apps and app stores, and have more power over default settings on their mobile computing devices. Users will no longer be steered into sticking with the defaults embedded in their devices and can choose, for example, their own default browser on Apple’s iOS. The DMA also tackles data collection practices: gatekeepers can no longer cross-combine user data or sign users into new services without their explicit consent, and must provide them with a specific choice. A “pay or consent” advertising model as proposed by Meta will probably not cut it.  

There are also new data access and sharing requirements that could benefit users, such as the right of end users to request effective portability of data and get access to effective tools to this end. One section of the DMA even requires gatekeepers to make their person-to-person messaging systems (like WhatsApp) interoperable with competitors’ systems on request—making it a globally unique ex ante obligation in competition regulation. At EFF, we believe that interoperable platforms can be a driver for technological self-determination and a more open internet. But even though data portability and interoperability are anti-monopoly medicine, they come with challenges: Ported data can contain sensitive information about you and interoperability poses difficult questions about security and governance, especially when it’s mandated for encrypted messaging services. Ideally, the DMA should be implemented to offer better protections for users’ privacy and security, new features, new ways of communication and better terms of service.  

There are many more do's and don'ts in the new fairness rulebook of the EU, such as the prohibition on platforms favouring their own products and services over those of rivals in ranking, crawling, and indexing (ensuring users a real choice!), along with many other measures. All these requirements are meant to create more fairness and contestability in digital markets—a laudable objective. If done right, the DMA presents an option for a real change for technology users—and a real threat to current abusive or unfair industry practices by Big Tech. But if implemented poorly, it could create more legal uncertainty, restrict free expression, or even legitimize the status quo. It is now up to the European Commission to bring the DMA’s promises to life. 

Public Interest 

As the EU’s 2024–2029 mandate is now in full swing, it will be important not to lose sight of the big picture. Fairness rules can only be truly fair if they follow a public-interest approach, empowering users, businesses, and society more broadly, and making it easier for users to control the technology they rely on. And we cannot stop here: the EU must strive to foster a public interest internet and support open-source and decentralized alternatives. Competition and innovation are interconnected forces, and the recent rise of the Fediverse makes this clear. Platforms like Mastodon and Bluesky thrive by filling gaps (and addressing frustrations) left by corporate giants, offering users more control over their experience and ultimately strengthening the resilience of the open internet. The EU should generally support user-controlled alternatives to Big Tech and use smart legislation to foster interoperability for services like social networks. In an ideal world, users are no longer locked into dominant platforms and the ad-tech industry—responsible for pervasive surveillance and other harms—is brought under control. 

What we don’t want is a European Union that conflates fairness with protectionist industrial policies or reacts to geopolitical tensions with measures that could backfire on digital openness and fair markets. The enforcement of the DMA and new EU competition and digital rights policies must remain focused on prioritizing user rights and ensuring compliance from Big Tech—not tolerating malicious (non)compliance tactics—and upholding the rule of law rather than politicized interventions. The EU should avoid policies that could lead to a fragmented internet and must remain committed to net neutrality. It should also not hesitate to counter the concentration of power in the emerging AI stack market, where control over infrastructure and technology is increasingly in the hands of a few dominant players. 

EFF will be watching. And we will continue to fight to save the internet in Europe, ensuring that fairness in digital markets remains rooted in choice, competition, and the right to innovate. 

Podcast Episode Rerelease: Dr. Seuss Warned Us

By Josh Richman
March 23, 2025 at 12:42

This episode was first released on May 2, 2023.

We’re excited to announce that we’re working on a new season of How to Fix the Internet, coming in the next few months! But today we want to lift up an earlier episode that has particular significance right now. In 2023, we spoke with our friend Alvaro Bedoya, who was appointed as a Commissioner for the Federal Trade Commission in 2022. In our conversation, we talked about his work there, about why we need to be wary of workplace surveillance, and why it’s so important for everyone that we strengthen our privacy laws. We even talked about Dr. Seuss!

Last week the Trump administration attempted to terminate Alvaro, along with another FTC commissioner, even though Alvaro's appointment doesn't expire until 2029. The law is clear: The president does not have the power to fire FTC commissioners at will. The FTC’s focus on protecting privacy has been particularly important over the last five years; with Alvaro's firing, the Trump Administration has stepped far away from that needed focus to protect all of us as users of digital technologies.

We hope you’ll take some time to listen to this May 2023 conversation with Alvaro about the better digital world he’s been trying to build through his work at the FTC and his previous work as the founding director of the Center on Privacy & Technology at Georgetown University Law Center.

Dr. Seuss wrote a story about a Hawtch-Hawtcher Bee-Watcher whose job it is to watch his town’s one lazy bee, because “a bee that is watched will work harder, you see.” But that doesn’t seem to work, so another Hawtch-Hawtcher is assigned to watch the first, and then another to watch the second... until the whole town is watching each other watch a bee.


You can also find this episode on the Internet Archive and on YouTube.

To Federal Trade Commissioner Alvaro Bedoya, the story—which long predates the internet—is a great metaphor for why we must be wary of workplace surveillance, and why we need to strengthen our privacy laws. Bedoya has made a career of studying privacy, trust, and competition, and wishes for a world in which we can do, see, and read what we want, living our lives without being held back by our identity, income, faith, or any other attribute. In that world, all our interactions with technology—from social media to job or mortgage applications—are on a level playing field.

Bedoya speaks with EFF’s Cindy Cohn and Jason Kelley about how fixing the internet should allow all people to live their lives with dignity, pride, and purpose.

In this episode, you’ll learn about: 

  • The nuances of work that “bossware,” employee surveillance technology, can’t catch. 
  • Why the Health Insurance Portability and Accountability Act (HIPAA) isn’t the privacy panacea you might think it is. 
  • Making sure that one-size-fits-all privacy rules don’t backfire against new entrants and small competitors. 
  • How antitrust fundamentally is about small competitors and working people, like laborers and farmers, deserving fairness in our economy. 

Alvaro Bedoya was nominated by President Joe Biden, confirmed by the U.S. Senate, and sworn in May 16, 2022 as a Commissioner of the Federal Trade Commission; his term expires in 2029. Bedoya was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. He has been influential in research and policy at the intersection of privacy and civil rights, and co-authored a 2016 report on the use of facial recognition by law enforcement and the risks that it poses. He previously served as the first Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law after its founding in 2011, and as Chief Counsel to former U.S. Sen. Al Franken (D-MN); earlier, he was an associate at the law firm WilmerHale. A naturalized immigrant born in Peru and raised in upstate New York, Bedoya previously co-founded the Esperanza Education Fund, a college scholarship for immigrant students in the District of Columbia, Maryland, and Virginia. He also served on the Board of Directors of the Hispanic Bar Association of the District of Columbia. He graduated summa cum laude from Harvard College and holds a J.D. from Yale Law School, where he served on the Yale Law Journal and received the Paul & Daisy Soros Fellowship for New Americans.

Transcript

ALVARO BEDOYA
One of my favorite Dr. Seuss stories is about this town called Hawtch-Hawtch. So, in the town of Hawtch-Hawtch, there's a town bee and, you know, they presumably make honey, but the Hawtch-Hawtchers one day realize that the bee that is watched will work harder, you see? And so they hire a Hawtch-Hawtcher to be on bee-watching watch, but then, you know, the bee isn't really doing much more than it normally is doing. And so they think, oh, well, the Hawtch-Hawtcher is not watching hard enough. And so they hire another Hawtch-Hawtcher to be on bee-watcher-watching watch, I think is what Dr. Seuss calls it. And so there's this wonderful drawing of 12 Hawtch-Hawtchers, you know, each one either on watching-watching watch or, actually, you know, the first one's watching the bee, and the whole thing is just completely absurd.

CINDY COHN
That’s FTC Commissioner Alvaro Bedoya describing his favorite Dr. Seuss story – which he says works perfectly as a metaphor for why we need to be wary of workplace surveillance, and strengthen our privacy laws.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy. This is our podcast, How to Fix the Internet.

Our guest today is Alvaro Bedoya. He’s served as a commissioner for the Federal Trade Commission since May of 2022, and before that he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. So he thinks a lot about many of the issues we’re also passionate about at EFF – trust, privacy, competition, for example – and about how these issues are all deeply intertwined.

CINDY COHN
We decided to start with our favorite question: What does the world look like if we get this stuff right?

ALVARO BEDOYA
For me, I think it is a world where you wake up in the morning, live your life, and your ability to do what you want to do, see what you wanna see, read what you wanna read, and live the life that you want to live is unconnected to who you are, in a good way.

In other words, what you look like, what side of the tracks you're from, how much money you have. Your gender, your gender identity, your sexuality, your religious beliefs, that those things don't hold you down in any way, and that you can love those things and have those things be a part of your life. But that they only empower you and help you. I think it's also a world… we see the great parts of technology. You know, one of the annoying things of having worked in privacy for so long is that you're often in this position where you have to talk about how technology hurts people. Technology can be amazing, right?

Mysterious, wonderful, uh, empowering. And so I think this is a world where those interactions are defined by those positive aspects of technology. And so for me, when I think about where those things go wrong, sorry, falling into old tropes here, but thinking about it positively, increasingly, people are applying for jobs online. They're applying for mortgages online. They are doing all these capital letter decisions that are now mediated by technology.

And so this world is also a world where, again, you are treated fairly in those decisions and you don't have to think twice about, hold on a second, I just applied for a loan. I just applied for a job, you know, I just applied for a mortgage. Is my zip code going to be used against me? Is my social media profile, you know, that reveals my interests, gonna be used against me? Is my race gonna be used against me? In this world, none of that happens, and you can focus on preparing for that job interview and finding the right house for you and your family, finding the right rental for you and your family.

Now, I think it's also a world where you can start a small business without fear that the simple fact that you're not connected to a bigger platform or a bigger brand won't be used against you, where you have a level playing field to win people over.

CINDY COHN
I think that's great. You know, leveling the playing field is one of the original things that we were hoping, you know, that digital technologies could do. It also makes me think of that old New Yorker thing, you know, on the internet, no one knows you're a dog.

ALVARO BEDOYA
(Laughs) Right.

CINDY COHN
In some ways I think the vision is, on the internet, you know, again, I don't think that people should leave the other parts of their lives behind when they go on the internet. Your identity matters, but it doesn't, the fact that you're a dog doesn't mean you can't play. I'm probably butchering that poor cartoon too much.

ALVARO BEDOYA
No, I don't. I don't think you are, but I don't know why, but it reminded me of one other thing, which is, in this world, you go to work, whether it's at home in your basement like I am now, you know, or in your car or at an office, uh, at a business, and you have a shot at working with pride and dignity, where every minute of your work isn't measured and quantified. Where you have the ability to focus on the work rather than the surveillance of that work, and the judgments that other people might make around that minute surveillance, and you can focus on the work itself. I think too often people don't recognize the strangeness of the fact that when you watch TV, when you watch a streaming site, when you watch cable, when you go shopping, all of that stuff is protected by privacy law. And yet most of us spend a good part of our waking hours working, and there are really no federal, uh, worker privacy protections. That, for me, is one of the biggest gaps in our sectoral privacy system that we've yet to confront.

But the world that you wanted me to talk about definitely is a world where you can go to work and do that work with dignity and pride, uh, without minute surveillance of everything you do.

CINDY COHN
Yeah. And I think inherent in that is this, you know, this, this observation that, you know, being watched all the time doesn't work as a matter of humanity, right? It's a human rights issue to be watched all the time. I mean, that's why when they build prisons, right, it's the panopticon, right? That's where that idea comes from, is this idea that people who have lost their liberty get watched all the time.

So that has to be a part of building this better future, a space where, you know, we’re not being watched all the time. And I think you're exactly right that we kind of have this gigantic hole in people's lives, which is their work lives where it's not only that people don't have enough freedom right now, it's actually headed in the other direction. I know this is something that we think about a lot, especially Jason does at EFF.

JASON KELLEY
Yeah, I mean we, we write quite a bit about bossware. We've done a variety of research into bossware technology. I wonder if you could talk a little bit about maybe like some concrete examples that you've seen where that technology is sort of coming to fruition, if you will. Like it's being used more and more and, and why we need to, to tackle it, because I think a lot of people probably, uh, listening to this aren't, aren't as familiar with it as they could be.

And at the top of this episode we heard you describe your favorite Dr. Seuss tale – about the bees and the watchers, and the watchers watching the watchers, and so on to absurdity. Now can you tell us why you think that’s such an important image?

ALVARO BEDOYA
I think it's a valuable metaphor for the fact that a lot of this surveillance software may not offer as complete a picture as employers might think it does. It may not have the effect that employers think it does, and it may not ultimately do what people want it to do. And so I think that anyone who is thinking about using the software should ask hard questions about ‘is this actually gonna capture what I'm being told it will capture? Does it account for the 20% tasks of my workers' jobs?’ So, you know, there's always an 80/20 rule and so, you know, as with, as with work, most of what you do is one thing, but there's usually 20% that's another thing. And I think there's a lot of examples where that 20%, like, you know, occasionally using the bathroom right, isn't accounted for by the software. And so it looks like the employee’s slacking, but actually they're just being a human being. And so I would encourage people to ask hard questions about the sophistication of the software and how it maps onto the realities of work.

JASON KELLEY
Yeah. That's a really accurate way for people to start to think about it, because I think a lot of people really feel that, um, if they can measure it, then it must be useful.

ALVARO BEDOYA
Yes!

JASON KELLEY
In my own experience, before I worked at EFF, I worked somewhere where, eventually, a sort of boss ware type tool was installed and it had no connection to the job I was doing.

ALVARO BEDOYA
That’s interesting.

JASON KELLEY
It was literally disconnected.

ALVARO BEDOYA:
Can you share the general industry?

JASON KELLEY
It was software. I worked as a, I was in marketing for a software company and, um, I was remote, and it was remote way before the pandemic. So, you know, there's sort of, I think bossware has increased probably during the pandemic. I think we've seen that because people are worried that if you're not in the office, you're not working.

ALVARO BEDOYA
Right.

JASON KELLEY
There's no evidence, bossware can't give evidence that that's true. It can just give evidence on, you know, whether you're at your computer –

ALVARO BEDOYA
Right. Whether you're typing.

JASON KELLEY
Whether you're typing. Yeah. And what happened in my scenario, without going into too much detail, was that it mattered what window I was in. And it didn't always, at first it was just like, are you at your computer for eight hours? And then it was, are you at your computer in these specific windows for eight hours? And then it was, are you typing in those specific windows for eight hours? The screws kept getting twisted, right, until I was actually at my computer for 12 hours to get eight hours of ‘productive’ work in, as it was called.

And so, yeah, I left that job. Obviously, I work at EFF now for a reason. And it was one of the things that I remember when I started at EFF, part of what I like about what we do is that we think about people's humanity in what they're doing and how that interacts with technology.

And I think bossware is one of those areas where it doesn't, um, because it, it is so common for an employer to sort of disengage from the employee and sort of think of them as like a tool. It's, it's an area where it's easy to install something, or try to install something, where that happens. So I'm glad you're working on it. It's definitely an issue.

ALVARO BEDOYA
Well, I'm thinking about it, you know, and it's certainly something I, I care about. And I think, I think my hope is, my hope is that, um, you know, the pandemic was horrific. Is horrific. My hope is that one of the realizations coming out of it, from so many people going remote, is the realization that particularly for some jobs, you know, uh, um, a lot of us are lucky to have these jobs where a lot of our time turns on being able to think clearly and carefully about, about something, and that's a luxury. Um, but particularly for those jobs, my, my suspicion is for an even broader range of jobs, that this idea of a workday where you sit down, work eight hours, and sit up, you know, and, and that that is the ideal workday, I don't think that's a maximally productive day. And I think there's some really interesting trials around the four-day work week, and my hope is that, you know, when my kids are older, there will be a recognition that working harder, staying up later, getting up earlier, is not the best way to get the best work from people. And people need time to think. They need time to relax. They need time to process things. And so that is my hope, that that is one of the realizations around it. But you're exactly right, Jason, one of my concerns around this software is that there's this idea that if it can be measured, it must be important. And I think you use a great example, speaking in general here, of software that may presume that if you aren't typing, you're not working, or if you're not in a window, you're not working, when actually you might be doing the most important work, you know, jotting down notes, organizing your thoughts, that lets you do the best stuff, as it were.

Music transition

JASON KELLEY
I want to jump in for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

Now back to our conversation with Alvaro Bedoya.

CINDY COHN
Privacy issues are of course near and dear to our hearts at EFF and I know that's really the world you come out of as well. Although your perch is a little, a little different right now. We came to the conclusion that we can't address privacy if we don't address competition and antitrust issues. And I think you've come someplace similar perhaps, and I'd love for you to talk about how you think privacy and questions around competition and antitrust intertwine.

ALVARO BEDOYA
So I will confess, I don't know if I have figured it out, but I can offer a few thoughts. First of all, I think that a lot of the antitrust claims are not what they seem to be. When companies talk about how important it is to have gatekeeping around app stores because of privacy, and this is one of the reasons I support the bills, I think it's the Blumenthal-Blackburn bill, to, um, to change the way app stores are, are run and, and kick the tires on that gatekeeping model, because I am skeptical about a lot of those pro-privacy, anti-antitrust claims. That is one thing. On the other hand, I do think we need to think carefully about the rules that are put in place backfiring against new entrants and small competitors. And I think a lot of legislators and policy makers in the US and Europe appreciate this and are getting this right, and institute a certain set of rules for bigger companies and different ones for smaller ones. I think one of the ways this can go wrong is when it's just about the size of the company rather than the size of the user base.

I think that if you are, you know, suddenly at a hundred million users, you're not a small company, even if you have, you know, a small number of employees. But I, I do think that those concerns are real, and that policymakers and people in my role need to think about the costs of privacy compliance in a way that does not inadvertently create an unlevel playing field for, for small competitors.

I will confess that sometimes things that appear to be, uh, um, antitrust problems are privacy problems, in that they reflect legal gaps around the sectoral privacy framework that unfortunately has yet to be updated. So I think I can give one example. There was the recent merger of, uh, Amazon and One Medical, and, well, I can't go into the antitrust analysis that may or may not have occurred at the commission, but I wrote a statement on the completion of the merger, which highlighted a gap that we have around the anonymization rule in our health privacy law. For example, people think that HIPAA is actually the Health Information Privacy Act. It's not, it's actually the Health Insurance Portability and Accountability Act. And I think that little piece of common wisdom speaks to a broader gap in our understanding of health privacy. So I think a lot of people think HIPAA will protect their data and that it won't be used in other ways by their doctor, by whoever it is that has their HIPAA-protected data. Well, it turns out that in 2000, when HHS promulgated the privacy rule in good faith, it had a provision that said, hey, look, we want to encourage the improvement in health services, we want to encourage health research, and we want to encourage public health. And so we're gonna say that if you remove these, you know, 18 identifiers from health data, it can be used for other purposes. And if you look at the rule that was issued, the justification for it is that they want to promote public health.

Unfortunately, they did not put a use restriction on that. And so now, if any, doctor's practice, anyone covered by HIPAA, and I'm not gonna go into the rabbit hole of who is and who isn't, but if you're covered by HIPAA, All they need to do is remove those identifiers from the data.

And HHS is unfortunately very clear that you can essentially do a whole lot of things that have nothing to do with healthcare as long as you do that. And what I wrote in my statement is that would surprise most consumers. Frankly, it surprised me when I connected the dots.

CINDY COHN
What I'm hearing here, which I think is really important, is, first of all, we start off by thinking that some of our privacy problems are really due to antitrust concerns, but what we learn pretty quickly when we're looking at this is, first of all, privacy is used, frankly, as a blocker for common-sense reforms that we might need, that these giants come in and they say, well, we're gonna protect people's privacy by limiting what apps are in the app store. And, and we need to look closely at that because it doesn't seem to be necessarily true.

So first of all, you have to watch out for the kind of fake privacy argument, or the argument that the tech giants need to be protected because they're protecting our privacy, and we need to really interrogate that. And at the bottom of it, it often comes down to the fact that we haven't really protected people's privacy as a legal matter, right? We, we ground ourselves in Larry Lessig's, uh, four pillars of change, right? Code, norms, laws, and markets. And you know, what they're saying is, well, we have to protect, you know, essentially what is a non-market, but the, the tech giants, that markets will protect privacy and so therefore we can't introduce more competition. And I think at the bottom of this, what we find a lot is that, you know, the law should be setting the baseline, and then markets can build on top of that. But we've got things a little backwards. And I think that's especially true in health. It's, it's very front and center for those of us who care about reproductive justice, who are looking at the way health insurance companies are now part and parcel of other data analysis companies. And the Amazon/One Medical one is, is another one of those where unless we get the privacy law right, it's gonna be hard to get at some of these other problems.

ALVARO BEDOYA
Yeah. And those are the three things that I think a lot about. First, that those pro-privacy arguments that seem to cut against, uh, competition concerns are often not what they seem.

Second, that we do need to take into account how one-size-fits-all privacy rules could backfire in a way that hurts, uh, small companies, small competitors, uh, who are the lifeblood of, uh, innovation and employment, frankly. And, and lastly, sometimes what we're actually seeing are gaps in our sectoral privacy system.

CINDY COHN
One of the things that I know you've, you've talked about a little bit is, um, you're calling it a return to fairness, and that's specifically talking about a piece of the FTC’s authority. And I wonder if you could talk about that a little more and how you see that fitting into a, a better world.

ALVARO BEDOYA
Sure. One of the best parts of this job, um, was having this need and opportunity to immerse myself in antitrust. So as a Senate staffer, I did a little bit of work on the Comcast, uh, NBC merger against, against that merger, uh, for my old boss, Senator Franken. But I didn't spend a whole lot of time on competition concerns. And so when I was nominated, I, you know, quite literally, you know, ordered antitrust treatises and read them cover to cover.

CINDY COHN
Wonderful!

ALVARO BEDOYA
Well, sometimes it's wonderful and sometimes it's not. But in this case it was. And what you see is this complete two-sided story where on the one hand you have this really anodyne, efficiency-based description of antitrust, where it is about enforcing abstract laws and maximizing efficiency, and the saying, you know, antitrust protects competition, not competitors, and you so quickly lose sight of why we have antitrust laws and how we got them.

And so I didn't just read treatises on the law. I also read histories. And one of the things that you read and realize when you read those histories is that antitrust isn't about efficiency, antitrust is about people. And yes, it's about protecting competition, but the reason we have it is because of what happened to certain people. And so, you know, the Sherman Act, you listen to those floor debates, it is fascinating, because first of all, everyone agrees as to what we want to do, what Congress wanted to do. Congress wanted to rein in the trusts. They wanted to rein in John Rockefeller, J.P. Morgan, the beef trust, the sugar trust, the steel trust. Not to mention, you know, Rockefeller's oil trust. The most common concern on the floor of the Senate was what was happening to cattlemen because of concentration in meat packing plants, and the prices they were getting when they brought their cattle to processors and to market. And then you look at, uh, 1914, the Clayton Act, again. There was outrage, true outrage, about how those antitrust laws were used: 10 out of the first 12 antitrust injunctions in our, in our country post-Sherman were targeted at workers, and not just any workers. They were targeted at rail car manufacturing workers in Pullman, where it was an integrated workforce and they were working extremely long hours for a pittance in wages, and they decided to strike.

And some of the first injunctions we saw in this country were used to break their strike. Or how it was used against, uh, I think they're called drayage men or draymen, in New Orleans, port workers and dock workers in New Orleans, who again were working these 12-hour days for, for nothing in wages. And this beautiful thing happened in New Orleans where the entire city went on strike.

It was, I think it was 30 unions. It was like the typographical workers unions. And if you think that that refers to people typing on keyboards, it does. From the people typing on mechanical typewriters to the people, you know, unloading ships in the port of New Orleans, everyone went on strike, and they had this, this organization called the Amalgamated Working Men's Council. And, um, they wanted a 10-hour, uh, workday, they wanted overtime pay, and they wanted, uh, union shops. They got two out of those three things. But, um, I think it was the trade board that was so unhappy with it that they, uh, persuaded federal prosecutors to sue under Sherman.

And it went before Judge Billings. And Judge Billings said, absolutely, this is a violation of the antitrust laws. And the curious thing about Judge Billings' decision, one of the first Sherman decisions in a federal court, is that he didn't cite, for the proposition that the strike was a restraint on trade, to restraint-of-trade law. He cited to much older decisions about criminal conspiracies and unions to justify his decision.

And so what I'm trying to say is over and over and over again, whenever, you know, you look at the actual history of antitrust laws, you know, it isn't about efficiency, it's about fairness. It is about how small competitors and working people, farmers, laborers, deserve a level playing field. And in 1890, 1914, 1936, 1950, this was what was front and center for Congress.

CINDY COHN
It's great to end with a deep dive into the original intent of Congress to protect ordinary people and fairness with antitrust laws, especially in this time when history and original intent are so powerful for so many judges. You know, it’s solid grounding for going forward. But I also appreciate how you mapped the history to see how that Congressional intent was perverted by the judicial branch almost from the very start.

This shows us where we need to go to set things right but also that it’s a difficult road. Thanks so much Alvaro.

JASON KELLEY
Well, it's a rare privilege to get to complain about a former employer directly to a sitting FTC commissioner. So that was a very enjoyable conversation for me. It's also rare to learn something new about Dr. Seuss and a Dr. Seuss story, which we got to do. But as far as actual concrete takeaways go from that conversation, Cindy, what did you pull away from that really wide ranging discussion?

CINDY COHN
It’s always fun to talk to Alvaro. I loved his vision of a life lived with dignity and pride as the goal of our fixed internet. I mean those are good solid north stars, and from them we can begin to see how that means that we use technology in a way that, for example, allows workers to just focus on their work. And honestly, while that gives us dignity, it also stops the kind of mistakes we’re seeing like tracking keystrokes, or eye contact as secondary trackers that are feeding all kinds of discrimination.

So I really appreciate him really articulating, you know, what are the kinds of lives we wanna have. I also appreciate his thinking about the privacy gaps that get revealed as technology changes and, and the, the story of healthcare and how HIPAA doesn't protect us in the way that we'd hoped it would, in part because I think HIPAA didn't start off at a very good place. But as things have shifted and, say, you know, One Medical is being bought by Amazon, suddenly we see that the presumption of who your insurance provider was and what they might use that information for has shifted a lot, and that the privacy law hasn't, hasn't kept up.

So I appreciate thinking about it from, you know, both of those perspectives, both, you know, what the law gets wrong and how technology can reveal gaps in the law.

JASON KELLEY
Yeah. That really stood out for me as well, especially the parts where Alvaro was talking about looking into the law in a way that he hadn't had to before. Like you say, because that is kind of what we do at EFF, at least part of what we do. And it's nice to hear that we are sort of on the same page and that there are people in government doing that. There are people at EFF doing that. There are people all over, in different areas, doing that. And that's what we have to do, because technology does change so quickly and so much.

CINDY COHN
Yeah, and I really appreciate the deep dive he's done into antitrust law, revealing that fairness is a deep, deep part of it, and that this idea that it's only about efficiency, and especially efficiency for consumers only, is ahistorical. And that's a good thing for us all to remember, since we, especially these days, have a Supreme Court that, you know, really likes history a lot and grounds and limits what it does in history. The history's on our side in terms of, you know, bringing competition law, frankly, to the digital age.

JASON KELLEY
Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member or donate, or look at hoodies, t-shirts, hats or other merch.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

MUSIC CREDITS

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:

Lost track by airtone
Common ground by airtone
Probably shouldn’t by J Lang

How Do You Solve a Problem Like Google Search? Courts Must Enable Competition While Protecting Privacy.

By Mitch Stoltz
March 20, 2025 at 18:28

Can we get from a world where Google is synonymous with search to a world where other search engines have a real chance to compete? The U.S. and state governments’ bipartisan antitrust suit, challenging the many ways that Google has maintained its search monopoly, offers an opportunity.

Antitrust enforcers have proposed a set of complementary remedies, from giving users a choice of search engine, to forcing Google to spin off Chrome and possibly Android into separate companies. Overall, this is the right approach. Google’s dominance in search is too entrenched to yield to a single fix. But there are real risks to users in the mix as well: Forced sharing of people’s sensitive search queries with competitors could seriously undermine user privacy, as could a breakup without adequate safeguards.

Let’s break it down.

The Antitrust Challenge to Google Search

The Google Search antitrust suit began in 2020 under the first Trump administration, brought by the Department of Justice and 11 states. (Another 38 states filed a companion suit.) The heart of the suit was Google’s agreements with mobile phone makers, browser makers, and wireless carriers, requiring that Google Search be the default search engine, in return for revenue share payments, including up to $20 billion per year that Google paid to Apple. A separate case, filed in 2023, challenged Google’s dominance in online advertising. Following a bench trial in fall 2023, Judge Amit Mehta of the D.C. federal court found Google’s search placement agreements to be illegal under the Sherman Antitrust Act, because they foreclosed competition in the markets for “general search” and “general search text advertising.”

The antitrust enforcers proposed a set of remedies in fall 2024, and filed a revised version this month, signalling that the new administration remains committed to the case. A hearing on remedies is scheduled for April.

The Obvious Fix: Ban Search Engine Exclusivity and Other Anticompetitive Agreements

The first part of the government’s remedy proposal bans Google from making the kinds of agreements that led to this lawsuit: agreements to make Google the default search engine on a variety of platforms, agreements to pre-install Google Search products on a platform, and other agreements that would give platforms an incentive not to develop a general search engine of their own. This would mean the end of Google’s pay-for-placement agreements with Apple, Samsung, other hardware makers, and browser vendors like Mozilla.

In practice, a ban on search engine default agreements means presenting users with a screen that prompts them to choose a default search engine from among various competitors. Choice screens aren’t a perfect solution, because people tend to stick with what they know. Still, research shows that choice screens can have a positive impact on competition if they are implemented thoughtfully. The court, and the technical committee appointed to oversee Google’s compliance, should apply the lessons of this research.

It makes sense that the first step of a remedy for illegal conduct should be stopping that illegal conduct. But that’s not enough on its own. Many users choose Google Search, and will continue to choose it, because it works well enough and is familiar. Also, as the evidence in this case demonstrated, the walls that Google has built around its search monopoly have kept potential rivals from gaining enough scale to deliver the best results for uncommon search queries. So we’ll need more tools to fix the competition problem.

Safe Sharing: Syndication and Search Index

The enforcers’ proposal also includes some measures that are meant to enable competitors to overcome the scale advantages that Google illegally obtained. One is requiring Google to let competitors use “syndicated” Google search results for 10 years, with no conditions or use restrictions other than “that Google may take reasonable steps to protect its brand, its reputation, and security.” Google would also have to share the results of “synthetic queries”—search terms generated by competitors to test Google’s results—and the “ranking signals” that underlie those queries. Many search engines, including DuckDuckGo, use syndicated search results from Microsoft’s Bing, and a few, like Startpage, receive syndicated results from Google. But Google currently limits re-ranking and mixing of those results—techniques that could allow competitors to offer real alternatives. Syndication is a powerful mechanism for giving rivals the benefits of Google’s scale, and a chance to eventually achieve similar scale of their own.
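
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of re-ranking and mixing of syndicated results the proposal would permit but current syndication terms forbid. The fetch functions, result format, and scoring weight are hypothetical stand-ins, not any real search API:

```python
# Hypothetical sketch: a small search engine blends syndicated results
# with results from its own index, then re-ranks everything with its
# own relevance signal. fetch_syndicated() and fetch_own() stand in
# for APIs that are not specified here; the scoring is illustrative.

def blend_and_rerank(query, fetch_syndicated, fetch_own, own_weight=1.2):
    """Merge two result lists and re-rank them by a combined score.

    Each result is a dict: {"url": str, "title": str, "score": float}.
    """
    merged = {}
    for result in fetch_syndicated(query):
        merged[result["url"]] = dict(result)
    for result in fetch_own(query):
        if result["url"] in merged:
            # Boost results that our own index independently surfaced.
            merged[result["url"]]["score"] += own_weight * result["score"]
        else:
            merged[result["url"]] = dict(result)
    # Present the blended list, highest combined score first.
    return sorted(merged.values(), key=lambda r: r["score"], reverse=True)
```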

Importantly, syndication doesn’t reveal Google users’ queries or other personal information, so it is a privacy-conscious tool.

Similarly, the proposal orders Google to make its index, the snapshot of the web that forms the basis for its search results, available to competitors. This too is reasonably privacy-conscious, because it presumably includes only data from web pages that were already visible to the public.

Scary Sharing: Users’ “Click and Query” Data

Another data-sharing proposal is more complicated from a privacy perspective: requiring Google to provide qualified competitors with “user-side data,” including users’ search queries and data sets used to train Google's ranking algorithms. Those queries and data sets can include intensely personal details, including medical issues, political opinions and activities, and personal conflicts. Google is supposed to apply “security and privacy safeguards,” but it's not clear how this will be accomplished. An order that requires Google to share even part of this data with competitors raises the risk of data breaches, improper law enforcement access, commercial data mining and aggregation, and other serious privacy harms.

Some in the search industry, including privacy-conscious companies like DuckDuckGo, argue that filtering this “click and query” data to remove personally identifying information can adequately protect users’ privacy while still helping Google’s competitors generate more useful search results. For example, Google could share only queries that were used by some number of unique users. This is the approach Google already takes to sharing user data under the European Union’s Digital Markets Act, though Google sets a high threshold that eliminates about 97% of the data. Other possible rules include excluding strings of numbers that could be Social Security or other identification numbers, along with other patterns that may mark sensitive information.
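
As a rough sketch of how threshold-based filtering could work, the Python below releases a query only if enough distinct users issued it and it contains no long digit runs. The threshold value and the digit pattern are illustrative assumptions, not the parameters Google actually applies under the DMA:

```python
import re
from collections import defaultdict

# Illustrative k-anonymity-style filter for "click and query" data:
# release a query only if at least `min_users` distinct users issued
# it, and drop queries containing long digit runs (possible Social
# Security or other ID numbers). Both knobs are assumptions for this
# sketch, not anyone's production parameters.

ID_NUMBER_PATTERN = re.compile(r"\d{6,}")  # six or more digits in a row

def filter_click_and_query(log_entries, min_users=30):
    """log_entries: iterable of (user_id, query) pairs."""
    users_per_query = defaultdict(set)
    for user_id, query in log_entries:
        users_per_query[query].add(user_id)

    released = []
    for query, users in users_per_query.items():
        if len(users) < min_users:
            continue  # too rare: the query itself may identify someone
        if ID_NUMBER_PATTERN.search(query):
            continue  # may embed an identification number
        released.append(query)
    return released
```

Even with both safeguards, rare but unfiltered combinations of terms can still leak information, which is one reason the tradeoff between competition and privacy described below is hard to escape.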

But click and query data sharing still sets up a direct conflict between competition and privacy. Google, naturally, wants to share as little data as possible, while competitors will want more. It’s not clear to us that there’s an optimal point that both protects users’ privacy well and also meaningfully promotes competition. More research might reveal a better answer, but until then, this is a dangerous path, where pursuing the benefits of competition for users might become a race to the bottom for users’ privacy.

The Sledgehammer: Splitting off Chrome and Maybe Android

The most dramatic part of the enforcers’ proposal calls for an order to split off the Chrome browser as a separate company, and potentially also the Android operating system. This could be a powerful way to open up search competition. An independent Chrome and Android could provide many opportunities for users to choose alternative search engines, and potentially to integrate with AI-based information location tools and other new search competitors. A breakup would complement the ban on agreements for search engine exclusivity by applying the same ban to Chrome and Android as to iOS and other platforms.

The complication here is that a newly independent Chrome or Android might have an incentive to exploit users’ privacy in other ways. Given a period of exclusivity in which Google could not offer a competing browser or mobile operating system, Chrome and Android could adopt a business model of monetizing users’ personal data to an even greater extent than Google. To prevent this, a divestiture (breakup) order would also have to include privacy safeguards, to keep the millions of Chrome and Android users from facing an even worse privacy landscape than they do now.

The DOJ and states are pursuing a strong, comprehensive remedy for Google’s monopoly abuses in search, and we hope they will see that effort through to a remedies hearing and the inevitable appeals. We’re also happy to see that the antitrust enforcers are seeking to preserve users’ privacy. To achieve that goal, and keep internet users’ consumer welfare squarely in sight, they should proceed with caution on any user data sharing, and on breakups.

EFF to NSF: AI Action Plan Must Put People First

By Rory Mir
March 13, 2025 at 18:53

This past January the new administration issued an executive order on Artificial Intelligence (AI), taking the place of the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and remove “engineered social agendas” from the industry. The action plan is now being developed, and the National Science Foundation (NSF) has opened it to public comment.

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, Generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.

These harms aren’t theoretical; we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and to use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.

Instead, the adoption of such tools must be done with a robust public notice-and-comment practice, as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil and identifies when the use of such AI tools is inappropriate or harmful.

Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.

For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.

This is the wrong approach. Looking for easy solutions like expanding copyright hurts everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our comments here.

Decentralization Reaches a Turning Point: 2024 in review

By Rory Mir
January 1, 2025 at 10:39

The steady rise of decentralized networks this year is transforming social media. Platforms like Mastodon, Bluesky, and Threads are still in their infancy but have already shown that when users are given options, innovation thrives, resulting in better tools and protections for our rights online. By moving towards a digital landscape that can’t be monopolized by one big player, we also see broader improvements to network resiliency and user autonomy.

The Steady Rise of Decentralized Networks

Fediverse and Threads

The Fediverse, a wide variety of sites and services most associated with Mastodon, continued to evolve this year. Meta’s Threads began integrating with the network, marking a groundbreaking shift for the company. Only a few years ago EFF dreamed of the impact an embrace of interoperability would have for a company that is notorious for building walled gardens that trap users within its platforms. By allowing Threads users to share their posts with Mastodon and the broader fediverse (and therefore, Bluesky) without leaving their home platform, Meta is introducing millions to the benefits of interoperability. We look forward to this continued trajectory, and for a day when it is easy to move to or from Threads, and still follow and interact with the same federated community. 
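
For the technically curious, the plumbing behind this interoperability is a pair of open standards: WebFinger for discovering accounts across servers, and ActivityPub for exchanging posts. Here is a minimal Python sketch of the discovery step; the account name in the example is hypothetical:

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch of fediverse account discovery via WebFinger
# (RFC 7033), the step that lets one server locate an account hosted
# on another before following it over ActivityPub.

def webfinger_lookup(account: str) -> dict:
    """Resolve 'user@domain' to its WebFinger descriptor."""
    _, domain = account.split("@", 1)
    query = urllib.parse.urlencode({"resource": f"acct:{account}"})
    url = f"https://{domain}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def actor_url(account: str) -> str:
    """Pick out the ActivityPub actor URL that federated servers fetch."""
    for link in webfinger_lookup(account).get("links", []):
        if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
            return link["href"]
    raise LookupError(f"no ActivityPub actor found for {account}")

# Example (hypothetical account): actor_url("someone@mastodon.social")
# returns the actor document URL other servers use to follow and
# fetch that account's posts.
```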

Threads’ enormous user base—100 million daily active users—now dwarfs both Mastodon and Bluesky. Its integration into more open networks is a potential turning point in popularizing the decentralized social web. However, Meta’s poor reputation on privacy, moderation, and censorship drove many Fediverse instances to preemptively block Threads, and may fragment the network.

We explored how Threads stacks up against Mastodon and Bluesky, across moderation, user autonomy, and privacy. This development highlights the promise of decentralization, but it also serves as a reminder that corporate giants may still wield outsized influence over ostensibly open systems.

Bluesky’s Explosive Growth

While Threads dominated in sheer numbers, Bluesky was this year’s breakout star. At the start of the year, Bluesky had fewer than 200,000 users and was still invite-only. In the last few months of 2024, however, the project experienced over 500% growth in a single month, ultimately reaching over 25 million users. 

Unlike Mastodon, which integrates into the Fediverse, Bluesky took a different path, building its own decentralized protocol (AT Protocol) to ensure user data and identities remain portable and users retain a “credible exit.” This innovation allows users to carry their online communities across platforms seamlessly, sparing them the frustration of rebuilding their community. Unlike the Fediverse, Bluesky has prioritized building a drop-in replacement for Twitter, and is still mostly centralized. Bluesky has a growing arsenal of tools available to users, embracing community creativity and innovation. 

While Bluesky will be mostly familiar to former Twitter users, we ran through some tips for managing your Bluesky feed, and answered some questions for people just joining the platform.

Competition Matters

Keeping the Internet Weird

The rise of decentralized platforms underscores the critical importance of competition in driving innovation. Platforms like Mastodon and Bluesky thrive because they fill gaps left by corporate giants, and encourage users to find experiences which work best for them. The traditional social media model puts up barriers so platforms can impose restrictive policies and prioritize profit over user experience. When the focus shifts to competition and a lack of central control, the internet flourishes.

Whether a user wants the community focus of Mastodon, the global megaphone of Bluesky, or something else entirely, smaller platforms let people build experiences independent of the motives of larger companies. Decentralized platforms are ultimately most accountable to their users, not advertisers or shareholders.

Making Tech Resilient

This year highlighted the dangers of concentrating too much power in the hands of a few dominant companies. A major global IT outage this summer starkly demonstrated the fragility of digital monocultures, where a single point of failure can disrupt entire industries. These failures underscore the importance of decentralization, where networks are designed to distribute risk, ensuring that no single system compromise can ripple across the globe.  

Decentralized projects like Meshtastic, which uses radio waves to provide internet connectivity in disaster scenarios, exemplify the kind of resilient infrastructure we need. However, even these innovations face threats from private interests. This year, a proposal from NextNav to claim the 900 MHz band for its own use put Meshtastic’s experimentation—and by extension, the broader potential of decentralized communication—at risk. As we discussed in our FCC comments, such moves illustrate how monopolistic power not only stifles competition but also jeopardizes resilient tools that could safeguard peoples' connectivity. 

Looking Ahead

This year saw meaningful strides toward building a decentralized, creative, and resilient internet for 2025. Interoperability and decentralization will likely continue to expand. As it does, EFF will be vigilant, watching for threats to decentralized projects and obstacles to the growth of open ecosystems.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review

For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission brought down many regulations covering all sectors, but Big Tech has been the center of their focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe. 

The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content. 

For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks” – those arising from their platforms’ underlying design and rules - as well as from how these services are used by the public. Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns; while simultaneously upholding fundamental rights, such as freedom of expression; while also considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.  

As you can see, the DSA is trying to walk a fine line: addressing safety concerns and the priorities of the market. The DSA imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without so proscribing the platforms’ operations that they can’t innovate and thrive.  

The DMA, on the other hand, concerns itself entirely with the macro level – not on the rights of users, but on the obligations of, and restrictions on, the largest, most dominant platforms.  

The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.  

Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem. 

As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?

Fairness Regulation: Ambition and High-Stakes Clashes 

There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true. 

Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.  

Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European. 

In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense for this is that they were contractually obliged to collect this information, because their terms and conditions represented a promise to users to show them surveillance ads, and if they didn’t gather all that information, they’d be breaking that promise. 

The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services – Facebook, Instagram, Threads, and its “metaverse” (snicker) - are no longer allowed to plunder your private information. They must get your consent. 

In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again. 

Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.  

For all its intransigence, Apple is getting off extremely light. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage – a dominant platform that the company CEO openly boasts about as a source of lock-in – was not a “gatekeeper platform.”

Platform regulation: A delicate balance 

For regulators and the public the growing power of online platforms has sparked concerns: how can we address harmful content, while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t on the firing line?  

EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.  

In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance! 

But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship. 

In 2024, our fears were realized, when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Theirry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK, and about an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, and then blurred the DSA’s critical line between "illegal” and “harmful”; Breton’s letter also ignored the territorial limits of the DSA, demanding content takedowns that reached outside the EU. 

Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Additionally, we recommended that DSA risk assessment and mitigation compliance evaluations prioritize ensuring respect for fundamental rights.  

The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But contrary to concerns raised by EFF and other civil society groups, a new law in the EU, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively making platforms host content by force. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.  

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

What You Should Know When Joining Bluesky

Par : Rory Mir
18 décembre 2024 à 12:51

Bluesky promises to rethink social media by focusing on openness and user control. But what does this actually mean for the millions of people joining the site?

November was a good month for alternatives to X. Many users hit their balking point after two years of controversial changes turned Twitter into X, a restrictive hub filled with misinformation and hate speech. Musk’s involvement in the U.S. presidential election was the last straw for many who are now looking for greener pastures.

Threads, the largest alternative, grew about 15% with 35 million new users. However, the most explosive growth came from Bluesky, seeing over 500% growth and a total user base of over 25 million users at the time of writing.

We’ve dug into the nerdy details of how Mastodon, Threads, and Bluesky compare, but given this recent momentum it’s important to clear up some questions for new Bluesky users, and what this new approach to the social web really means for how you connect with people online.

Note that Bluesky is still in an early stage, and many big changes are anticipated from the project. Answers here are accurate as of the time of writing, and will indicate the company’s future plans where possible.

Is Bluesky Just Another Twitter?

At face value the Bluesky app has a lot of similarities to Twitter prior to becoming X. That’s by design: the Bluesky team has prioritized making a drop-in replacement for 2022 Twitter, so everything from the layout, posting options, and even color scheme will feel familiar to users familiar with that site. 

While discussed in the context of decentralization, this experience is still very centralized like traditional social media, with a single platform controlled by one company, Bluesky PBLLC. However, a few aspirations from this company make it stand out: 

  1. Prioritizing interoperability and community development: Other platforms frequently get this wrong, so this dedication to user empowerment and open source tooling is commendable. 
  2. “Credible Exit” Decentralization: Bluesky the company wants Bluesky, the network, to be able to function even if the company is eliminated or ‘enshittified.’

The first difference is evident already from the wide variety of tools and apps on the network. From blocking certain content to highlighting communities you’re a part of, there are a lot of settings to make your feed yours— some of which we walked through here. You can also abandon Bluesky’s Twitter-style interface for an app like Firesky, which presents a stream of all Bluesky content. Other apps on the network can even be geared towards sharing audio, events, or work as a web forum, all using the same underlying AT protocol. This interoperable and experimental ecosystem parallels another based on the ActivityPub protocol, called “The Fediverse”, which connects Threads to Mastodon as well as many other decentralized apps which experiment with the functions of traditional social media sites.

That “credible exit” priority is less immediately visible, but explains some of the ways Bluesky looks different. The most visible difference is that usernames are domain names, with the default for new users being a subdomain of bsky.social. EFF set it up so that our account name is our website, @eff.org, which will be the case across the Bluesky network, even if viewed with different apps. Comparable to how Mastodon handles verification, no central authority or government documents are needed for verification, just proof of control over a site or record.

As Bluesky decentralizes, it is likely to diverge more from the Twitter experience as the tricky problems of decentralization creep in. 

How Is Bluesky for Privacy?

While Bluesky is not engaged in surveillance-based advertising like many incumbent social media platforms, users should be aware that shared information is more public and accessible than they might expect.

Bluesky, the app, offers some sensible data-minimizing defaults like requiring user consent for third-party embedded media, which can include tracking. The real assurance to users, however, is that even if the flagship apps were to become less privacy protective, the open tools let others make full-featured alternative apps on the same network.

However, by design, Bluesky content is fully public on the network. Users can change privacy settings to encourage apps on the network to require login to view your account, but it is optional to honor. Every post, every like, and every share is visible to the world. Even blocking data is plainly visible. By design all of this information is also accessible in one place, as Bluesky aims to be the megaphone for a global audience Twitter once was.

This transparency extends to how Bluesky handles moderation, where users and content are labeled by a combination of Bluesky moderators, community moderators, and automated labeling. The result is information about you will, over time, be held by these moderators to either promote or hide your content.

Users leaving X out of frustration for the platform using public content to feed AI training may also find that this approach of funneling all content into one stream is very friendly to scraping for AI training by third parties.  Bluesky’s CEO has been clear the company will not engage in AI licensing deals, but it’s important to be clear this is inherent to any network prioritizing openness. The freedom to use public data for creative expression, innovation, and research extends to those who use it to train AI.

Users you have blocked may also be able to use this public stream to view your posts without interacting with you. If your threat model includes trolls and other bad actors who might reshare your posts in other contexts, this is important to consider.

Direct messages are not included in this heap of public information. However they are not end-to-end encrypted, and only hosted by Bluesky servers. As was the case for X, that means any DM is visible to Bluesky PBLLC. DMs may be accessed for moderation, for valid police warrants, and may even one day be public through a data breach. Encrypted DMs are planned, but we advise sensitive conversations be moved to dedicated fully encrypted conversations.

How Do I Find People to Follow?

Tools like Skybridge are being built to make it easier for people to import their Twitter contacts into Bluesky. Similar to advice we gave for joining Mastodon, keep in mind these tools may need extensive account access, and may need to be re-run as more people switch networks.

Bluesky has also implemented “starter packs,” which are curated lists of users anyone can create and share to new users. EFF recently put together a few for you to check out:

Is Bluesky In the Fediverse?

Fediverse” refers to a wide variety of sites and services generally communicating with each other over the ActivityPub protocol, including Threads, Mastodon, and a number of other projects. Bluesky uses the AT Protocol, which is not currently compatible with ActivityPub, thus it is not part of “the fediverse.”

However, Bluesky is already being integrated into the vision of an interoperable and decentralized social web. You can follow Bluesky accounts from the fediverse over RSS. A number of mobile apps will also seamlessly merge Bluesky and fediverse feeds and let you post to both accounts. Even with just one Bluesky or fediverse account, users can also share posts and DMs to both networks using a project called Bridgy Fed.

In recent weeks this bridging also opened up to the hundreds of millions of Threads users. It just requires an additional step of enabling fediverse sharing, before connecting to the fediverse Bridgy Fed account.  We’re optimistic that all of these projects will continue to improve integrations even more in the future.

Is the Bluesky Network Decentralized?

The current Bluesky network is not decentralized. 

It is nearly all made and hosted by one company, Bluesky PBLLC, which is working on creating the “credible exit” from their control as a platform host. If Bluesky the company and the infrastructure it operates disappeared tonight, however, the entire Bluesky network would effectively vanish along with it.

Of the 25 million users, only 10,000 are hosted by a non-Bluesky services — most of which through fediverse connections. Changing to another host is also currently a one-way exit.  All DMs rely on Bluesky owned servers, as does the current system for managing user identities, as well as the resource-intensive “Relay” server aggregating content from across the network. The same company also handles the bulk of moderation and develops the main apps used by most users. Compared to networks like the fediverse or even email, hosting your own Bluesky node currently requires a considerable investment.

Once this is no longer the case, a “credible exit” is also not quite the same as “decentralized.” An escape hatch for particularly dire circumstances is good, but it falls short of the distributed power and decision making of decentralized networks. This distinction will become more pressing as the reliance on Bluesky PBLLC is tested, and the company opens up to more third parties for each component of the network. 

How Does Bluesky Make Money?

The past few decades have shown the same ‘enshittification’ cycle too many times. A new startup promises something exciting, users join, and then the platform turns on users to maximize profits—often through surveillance and restricting user autonomy. 

Will Bluesky be any different? From the team’s outlined plan we can glean that Bluesky promises not to use surveillance-based advertising, nor lock-in users. Bluesky CEO Jay Graber also promised to not sell user content to AI training licenses and intends to always keep the service free to join. Paid services like custom domain hosting or paid subscriptions seem likely. 

So far, though, the company relies on investment funding. It was initially incubated by Twitter co-founder Jack Dorsey— who has since distanced himself from the project—and more recently received 8 million and 15 million dollar rounds of funding. 

That later investment round has raised concerns among the existing userbase that Bluesky would pivot to some form of cryptocurrency service, as it was led by Blockchain Capital, a cryptocurrency focused venture capital company which also had a partner join the Bluesky board. Jay Graber committed to “not hyperfinancialize the social experience” with blockchain projects, and emphasized that Bluesky does not use blockchain.

As noted above, Bluesky has prioritized maintaining a “credible exit” for users, a commitment to interoperability that should keep the company accountable to the community and hopefully prevent the kind of “enshittification” that drove people away from X. Holding the company to all of these promises will be key to seeing the Bluesky network and the AT protocol reach that point of maturity.

How Does Moderation Work?

Our comparison of Mastodon, Threads, and Bluesky gets into more detail, but as it stands Bluesky’s moderation is similar to Twitter’s before Musk. The Bluesky corporation uses the open moderation tools to label posts and users, and will remove users from their hosted services for breaking their terms of service. This tooling keeps the Bluesky company’s moderation tied to its “credible exit” goals, giving it the same leverage any other future operator might have. It also means  Bluesky’s centralized moderation of today can’t scale, and even with a good faith effort it will run into issues.

Bluesky accounts for this by opening its moderation tools to the community. Advanced options are available under settings in the web app, and anyone can label content and users on the site. These labels let users filter, prioritize, or block content. However, only Bluesky has the power to “deplatform” poorly behaved users by removing them, either by no longer hosting their account, no longer relaying their content to other users, or both.

Bluesky aspires to censorship resistance, and part of creating a “credible exit” means reducing the company’s ability to remove users entirely. In a future with a variety of hosts and relays on the Bluesky network, removing a user looks more like removing a website from the internet—not impossible, but very difficult. Instead users will need to settle with filtering out or blocking speech they object to, and take some comfort that voices they align with will not be removed from the network. 

The permeability of Bluesky also means community tooling will need to address network abuses, like last May when a pro-Trump botnet on Nostr bridged to Bluesky via Mastodon to flood timelines. It’s possible that like in the Fediverse, Bluesky may eventually form a network of trusted account hosts and relays to mitigate these concerns.

Bluesky is still a work in progress, but its focus on decentralization, user control, and interoperability makes it an exciting space to watch. Whether you’re testing the waters or planning a full migration, these insights should help you navigate the platform.

No Matter What the Bank Says, It's YOUR Money, YOUR Data, and YOUR Choice

30 octobre 2024 à 08:16

The Consumer Finance Protection Bureau (CFPB) has just finalized a rule that makes it easy and safe for you to figure out which bank will give you the best deal and switch to that bank, with just a couple of clicks. 

We love this kind of thing: the coolest thing about a digital world is how easy it is to switch from product or service to another—in theory. Digital tools are so flexible, anyone who wants your business can write a program to import your data into a new service and forward any messages or interactions that show up at the old service.

That's the theory. But in practice, companies have figured out how to use law - IP law, cybersecurity law, contract law, trade secrecy law—to literally criminalize this kind of marvelous digital flexibility, so that it can end up being even harder to switch away from a digital service than it is to hop around among traditional, analog ones.

Companies love lock-in. The harder it is to quit a product or service, the worse a company can treat you without risking your business. Economists call the difficulties you face in leaving one service for another the "switching costs" and businesses go to great lengths to raise the switching costs they can impose on you if you have the temerity to be a disloyal customer. 

So long as it's easier to coerce your loyalty than it is to earn it, companies win and their customers lose. That's where the new CFPB rule comes in.

Under this rule, you can authorize a third party - another bank, a comparison shopping site, a broker, or just your bookkeeping software - to request your account data from your bank. The bank has to give the third party all the data you've authorized. This data can include your transaction history and all the data needed to set up your payees and recurring transactions somewhere else.

That means that—for example—you can authorize a comparison shopping site to access some of your bank details, like how much you pay in overdraft fees and service charges, how much you earn in interest, and what your loans and credit cards are costing you. The service can use this data to figure out which bank will cost you the least and pay you the most. 

Then, once you've opened an account with your new best bank, you can direct it to request all your data from your old bank, and with a few clicks, get fully set up in your new financial home. All your payees transfer over, all your regular payments, all the transaction history you'll rely on at tax time. "Painless" is an admittedly weird adjective to apply to household finances, but this comes pretty darned close.

Americans lose a lot of money to banking fees and low interest rates. How much? Well, CFPB economists, using a very conservative methodology, estimate that this rule will make the American public at least $677 million better off, every year.

Now, that $677 million has to come from somewhere, and it does: it comes from the banks that are currently charging sky-high fees and paying rock-bottom interest. The largest of these banks are suing the CFPB in a bid to block the rule from taking effect.

These banks claim that they are doing this to protect us, their depositors, from a torrent of fraud that would be unleashed if we were allowed to give third parties access to our own financial data. Clearly, this is the only reason a giant bank would want to make it harder for us to change to a competitor (it can't possibly have anything to do with the $677 million we stand to save by switching).

We've heard arguments like these before. While EFF takes a back seat to no one when it comes to defending user security (we practically invented this), we reject the idea that user security is improved when corporations lock us in (and leading security experts agree with us).

This is not to say that a bad data-sharing interoperability rule wouldn't be, you know, bad. A rule that lacked the proper safeguards could indeed enable a wave of fraud and identity theft the likes of which we've never seen.

Thankfully, this is a good interoperability rule! We liked it when it was first proposed, and it got even better through the rulemaking process.

First, the CFPB had the wisdom to know that a federal finance agency probably wasn't the best—or only—group of people to design a data-interchange standard. Rather than telling the banks exactly how they should transmit data when requested by their customers, the CFPB instead said, "These are the data you need to share and these are the characteristics of a good standards body. So long as you use a standard from a good standards body that shares this data, you're in compliance with the rule." This is an approach we've advocated for years, and it's the first time we've seen it in the wild.

The CFPB also instructs the banks to fail safe: any time a bank gets a request to share your data that it thinks might be fraudulent, they have the right to block the process until they can get more information and confirm that everything is on the up-and-up.

The rule also regulates the third parties that can get your data, establishing stringent criteria for which kinds of entities can do this. It also limits how they can use your data (strictly for the purposes you authorize) and what they need to do with the data when that has been completed (delete it forever), and what else they are allowed to do with it (nothing). There's also a mini "click-to-cancel" rule that guarantees that you can instantly revoke any third party's access to your data, for any reason.

The CFPB has had the authority to make a rule like this since its founding in 2010, with the passage of the Consumer Financial Protection Act (CFPA). Back when the CFPA was working its way through Congress, the banks howled that they were being forced to give up "their" data to their competitors.

But it's not their data. It's your data. The decision about who you share it with belongs to you, and you alone.

Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff

29 octobre 2024 à 09:24

A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors, and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.

Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.

You love to see it.

Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores (Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get all their apps from the company's store.

This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can demand all kinds of concessions from you, in order to reach me. The most important concession is money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with an app - not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.

For avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that's gone up 40 percent since covid hit, a price-hike that is itself attributable to monopoly power in the sector.That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case, they only charge five times more than the Visa/Mastercard cartel).

Epic Games - the company behind the wildly successful multiplayer game Fortnite - has been chasing Google and Apple through the courts over this for years, and last December, they prevailed in their case against Google.

This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to decide what to do about it.

It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The Verge

For the next three years, Google must meet the following criteria:

  • Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
  • Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as they'd install any app;
  • Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
  • Permit app vendors to tell users about other ways to pay for the things they buy in-app;
  • Permit app vendors to set their own prices.

Google is also prohibited from using its cash to fence out rivals, for example, by:

  • Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
  • Offering incentives to app vendors to avoid rival app stores;
  • Offering incentives to hardware makers to pre-install Google Play;
  • Offering incentives to hardware makers not to install rival app stores.

These provisions tie in with Google's other recent  loss; in Google v. DoJ, where the company was found to have operated a monopoly over search. That case turned on the fact that Google paid unimaginably vast sums - more than $25 billion per year - to phone makers, browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that anyone who came up with a better search engine would have no hope of finding users.

What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software authors are at its mercy. By making it easy for users to switch from one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not because you're holding 3.3 billion Android users hostage."

Interoperability - plugging new features, services and products into existing systems - is digital technology's secret superpower, and it's great to see the courts recognizing how a well-crafted interoperability order can cut through thorny tech problems. 

Google has vowed to appeal. They say they're being singled out, because Apple won a similar case earlier this year. It's true, a different  court got it wrong with Apple.

But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple responded to the EU by refusing to substantively comply with the law, teeing up another big, hairy battle.

In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game devs who are dolphins caught in Google's tuna net to the free press itself..

Disability Rights Are Technology Rights

24 octobre 2024 à 17:57

At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological self-determination is important for every technology user, and it’s especially important for users with disabilities.

Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the design process. 

The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of European democratic sentiment in sixteenth (!) century and it expresses a critical truth: no one can ever know your needs as well you do. Unless you get a say in how things work, they’ll never work right.

So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start, not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.

Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to ensuring that their gadgets, code and systems can’t be modified by their users.

Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec includes numerous accessibility features, including facilities for including closed captioning and audio descriptive tracks.

But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that provides for five-year prison-sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists who added more accessibility features to EME.

The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe effects, and dampens them if there are. 

It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole right to do so. No one should have that power.

It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the Federal Trade Commission has signaled that it will crack down on companies that use DRM to block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where not only is it normal to distribute tools for people with disabilities are designed without any consideration for the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

(EFF extends our thanks to Liz Henry for their help with this article.)

A Flourishing Internet Depends on Competition

Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of  “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since a lot of services the Big Tech companies offer are “free.” This is why we must move beyond price as the major determinator of consumer harm. And once that’s done, it’s easier to see even greater benefits competition brings to the greater internet ecosystem. 

In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the build up of power around the giants and instead have fertile soil for new growth.

Content Moderation 

In content moderation, for example, it’s basically rote for experts to say that content moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if it was hiring experts in every language it is in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.  

This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussions about then-President Donald Trump in 2019, as it was getting toxic. A number of disgruntled users banded together to make their disallowed content available in other places. 

In a competitive landscape, instead of demanding that Facebook or Twitter, or YouTube have the exact content rules you want, you could pick a service with the ones you want. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that. Especially since smaller platforms could actually enforce its rules, unlike the current behemoths.  

Product Quality 

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. But, Facebook is an American company and it conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but other may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice because it is not concerned with losing users to a competitor that does. It’s concerned with being able to serve as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site. That includes the ability to reorganize your feed to be chronological, to eliminate posts from anyone you don’t know, etc. There may be people who like the current, ad-focused algorithm, but no one else can get a product they would like. 

Another obvious example is how much the experience of googling something has deteriorated. It’s almost hack to complain about it now, but when when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow normal language searching (that is, not requiring you to use boolean searches in order to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result to a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and towards quantity.  It also clogged the first page of results with ads and sponsored links.  

Morals, Privacy, and Security 

There are many individuals and small businesses that would like to avoid using Big Tech services, either because they are bad or because they have ethical and moral concerns. But, the bigger they are, the harder it is to avoid. For example, even if someone decides not to buy products from Amazon.com because they don’t agree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom are all companies that pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means that there is no way for people to “vote with their feet,” withholding their business from companies they disagree with.  

Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year's Crowdstrike outage, where one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone is only able to share expression in a few places that makes it easier for regimes to target certain speech and for gatekeepers to maintain control over creativity 

Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google’s messaged it as removing “third-party cookies” that track you across the internet. However, the change actually just moved that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does tracking within the browser directly, allowing Google to charge for access to the insights gleaned from your browsing history with advertisers and websites, rather than those companies doing it themselves. It’s not more privacy, it’s just concentrated control of data. 

You see this same thing at play with Apple’s app store in the saga of Beeper Mini, an app that allowed secure communications through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicated that messages were not encrypted (ie not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, it ended up being a design choice that motivated people to get iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to get a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t, it just prevented users from securing them on their own.  

Obviously, competition isn’t a panacea. But, like privacy, its prioritization means less emergency firefighting and more fire prevention. Think of it as a controlled burn—removing the dross that smothers new growth and allows fires to rage larger than ever before.  

FTC Findings on Commercial Surveillance Can Lead to Better Alternatives

8 octobre 2024 à 13:04

On September 19, the FTC published a staff report following a multi-year investigation of nine social media and video streaming companies. The report found a myriad of privacy violations to consumers stemming largely from the ad-revenue based business models of companies including Facebook, YouTube, and X (formerly Twitter) which prompted unbridled consumer surveillance practices. In addition to these findings, the FTC points out various ways in which user data can be weaponized to lock out competitors and dominate the respective markets of these companies.

The report finds that market dominance can be established and expanded by acquisition and maintenance of user data, creating an unfair advantage and preventing new market entrants from fairly competing. EFF has found that  this is not only true for new entrants who wish to compete by similarly siphoning off large amounts of user data, but also for consumer-friendly companies who carve out a niche by refusing to play the game of dominance-through-surveillance. Abusing user data in an anti-competitive manner means users may not even learn of alternatives who have their best interests, rather than the best interests of the company advertising partners, in mind.

The relationship between privacy violations and anti-competitive behavior is elaborated upon in a section of the report which points out that “data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.” In contrast with the recent United States v. Google LLC (2020) ruling, where Judge Amit P. Mehta found that the data collection practices of Google, though injurious to consumers, were outweighed by an improved user experience, the FTC highlighted a dangerous feedback loop in which privacy abuses beget further privacy abuses. We agree with the FTC and find the identification of this ‘unvirtuous cycle’ a helpful focal point for further antitrust action.

In an interesting segment focusing on the existing protections the European Union’s General Data Protection Regulation (GDPR) specifies for consumers’ data privacy rights which the US lacks, the report explicitly mentions not only the right of consumers to delete or correct the data held by companies, but importantly also the right to transfer (or port) one’s data to the third party of their choice. This is a right EFF has championed time and again in pointing out the strength of the early internet came from nascent technologies’ imminent need (and implemented ability) to play nicely with each other in order to make any sense—let alone be remotely usable—to consumers. It is this very concept of interoperability which can now be re-discovered and give users control over their own data by granting them the freedom to frictionlessly pack up their posts, friend connections, and private messages and leave when they are no longer willing to let the entrenched provider abuse them.

We hope and believe that the significance of the FTC staff report comes not only from the abuses they have meticulously documented, but the policy and technological possibilities that can follow from the willingness to embrace alternatives. Alternatives where corporate surveillance cementing dominant players based on selling out their users is not the norm. We look forward to seeing these alternatives emerge and grow.

NextNav’s Callous Land-Grab to Privatize 900 MHz

Par : Rory Mir
13 septembre 2024 à 10:52

The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. 

Left by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. 

EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum. The country's airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would not only give NextNav their own lane, but expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup of the Global Positioning System(GPS). This plan raises red flags off the bat. 

Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS.

NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal.

What NextNav did include, however, is its intent to resell its exclusive bandwidth access to mobile 5G networks. This isn't about national security or innovation; it's about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in its GPS backup vision, it should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band.

One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail.
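
As a rough illustration of how low the barrier to entry is on this shared spectrum, here is a minimal sketch using the Meshtastic project's Python library (the `meshtastic` package). The message text is made up, and the sketch assumes a Meshtastic radio is attached over USB:

```python
# A minimal sketch of sending an off-grid message over Meshtastic,
# which operates in the unlicensed 902-928 MHz ISM band in the US.
import meshtastic.serial_interface

# Connect to the first Meshtastic radio found on a local serial port.
interface = meshtastic.serial_interface.SerialInterface()

# Broadcast a text message to the mesh; nearby nodes relay it onward,
# with no central server or commercial carrier involved.
interface.sendText("Checking in -- no cell service needed")

interface.close()
```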

This is the type of innovation that actually addresses the crises raised by NextNav, and it’s happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet, this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses.

Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. 

The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it.

You can read our FCC comments here.

CrowdStrike, Antitrust, and the Digital Monoculture

By: Rory Mir
August 1, 2024 at 12:58

Last month’s unprecedented global IT failure should be a wakeup call. Decades of antitrust inaction have made many industries dangerously reliant on the same tools, making such crises inevitable. We must demand regulators break up the digital monocultures that are creating a less competitive, less safe, and less free digital world.

The Federal Trade Commission (FTC) solicited public comments last year on the state of the cloud computing market. EFF made it clear that the consolidation of service providers has created new dangers for everyone and urged the commission to encourage interoperability so customers could more easily switch and mix cloud services. Microsoft cautioned against intervention, touting the benefits of centralized cloud services for IT security.

A year later, a key cloud-based cybersecurity firm pushed a buggy update unique to Microsoft systems. Vital IT systems were disrupted for millions worldwide.

This fragility goes beyond issues at a specific firm; it results from power being overly concentrated around a few major companies.

What Happened

The widespread and disruptive tech outage last month happened thanks to an overreliance on one particular tool, CrowdStrike's Falcon sensor software. While not a monopoly, this tool is the most popular of the end-point protection platforms.

This niche service often used by companies is best understood as an antivirus tool for devices, controlled by a cloud platform. “End-point” computers run the agent with very deep system permissions to scan for security issues, and the company CrowdStrike regularly pushes remote software updates to this tool. This setup means many devices rely on a single source for their security, leveraging shared insights learned across devices. It also means that many devices share a single point of failure.
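
To make that structural risk concrete, here is a purely illustrative Python simulation; it is a toy model of correlated failure, not a description of CrowdStrike's actual update mechanism:

```python
# Illustrative only: a toy model of fleet-wide exposure to one update channel.
import random

FLEET_SIZE = 10_000

def push_update(fleet, update_is_faulty):
    """Apply a vendor-pushed update to every endpoint that uses the vendor."""
    for endpoint in fleet:
        if endpoint["vendor"] == "VendorA" and update_is_faulty:
            endpoint["healthy"] = False

# Monoculture: every endpoint runs the same agent from the same vendor.
fleet = [{"vendor": "VendorA", "healthy": True} for _ in range(FLEET_SIZE)]
push_update(fleet, update_is_faulty=True)
down = sum(not e["healthy"] for e in fleet)
print(f"Monoculture: {down}/{FLEET_SIZE} endpoints down")  # all of them

# Mixed fleet: the same faulty push only hits VendorA's share of endpoints.
fleet = [{"vendor": random.choice(["VendorA", "VendorB", "VendorC"]),
          "healthy": True} for _ in range(FLEET_SIZE)]
push_update(fleet, update_is_faulty=True)
down = sum(not e["healthy"] for e in fleet)
print(f"Mixed fleet: {down}/{FLEET_SIZE} endpoints down")  # roughly a third
```

The point is not the exact numbers but the shape of the risk: the more endpoints share one update channel, the more a single faulty push behaves like a disaster rather than a bug.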

An early sign of this problem came last April, when a CrowdStrike update disrupted devices running Debian and Rocky Linux operating systems. Linux “end-point” devices are uncommon, let alone those running these specific distributions with CrowdStrike software. What should have been a red flag in April was instead barely a blip.

Last month, CrowdStrike disrupted two other operating systems with a bad update: Windows 10 and 11. This time it spurred a Y2K-like collapse of crucial computer systems around the globe. Airlines, hospitals, financial institutions, schools, broadcasters, and more were brought to a standstill as an erroneous update on CrowdStrike’s platform caused system crashes. Instead of an inconvenience for a few companies, it more closely resembled a government shutdown or a natural disaster.

Both cases had similar impacts on devices, but the latter was an absolute disaster for infrastructure because of a digital landscape dominated by a few key players. Having so many sectors rely on a handful of services for the same operating systems makes them all susceptible to the same bugs, with even systems running absurdly old versions of Windows gaining an advantage for providing some diversity.

Whatever went wrong at CrowdStrike was just a spark. Last month it ignited the powder keg of digital monocultures.

Digital Monoculture

All computers are broken. Every piece of software and hardware is just waiting to fail in unexpected ways, and while your friendly neighborhood hackers and researchers can often hold off some of the worst problems by finding and reporting them, we need to mitigate inevitable failures. A resilient and secure digital future can’t be built on hope alone.

Yet, that’s exactly what we’re doing. The US has not just tolerated but encouraged a monopolistic tech industry with too little competition in key markets. Decades of antitrust policy have been based on the wrongheaded idea that sheer size will make tech companies efficient and better able to serve customers. Instead, we have airports, hospitals, schools, financial systems, and more all reliant on the same software, vulnerable to the same bugs and hacks. We created a tech industry that is too big to fail.

We live in the age of the digital monoculture, where single vulnerabilities can tear through systems globally: sabotaging hospitals and city governments with ransomware, compromising electrical systems through state-sponsored attacks, and breaching staggering amounts of private data. Name a class of device or software, and more often than not the majority of the market is controlled by a few companies—often the same ones: Android and iPhone; Windows and Mac; Gmail and Outlook; Chrome and Safari. When it comes to endpoint security products, three companies control half of the market, the largest being Microsoft and CrowdStrike.

Much like monocultures in agriculture, the lack of diversity makes the whole ecosystem more fragile. A new pest or disease can cause a widespread collapse without a backup plan. The solution, accordingly, is to increase diversity in the tech market through tougher antitrust enforcement, and for organizations to make IT system diversity a priority.

Allowing an over-reliance on a shrinking number of companies like Microsoft will only ensure more frequent and more devastating harms in the future.

How we got here

Broken Antitrust

As EFF has pointed out, and argued to the FTC, antitrust has failed to address the realities of a 21st-century internet.

Since the 1980s, US antitrust has been dominated by “consumer welfare” theory, which suggests corporate monopolies are fine, and maybe even preferable, so long as they are not raising prices. Subtler economic harms of monopoly, along with harms to democracy, labor rights, and the environment are largely ignored.

For the past several years, the FTC has pressed for a return to the original intent of antitrust law: viewing consumers not just as walking wallets, but as individuals who deserve to live unburdened by monopoly interests.

But we have a long way to go. We are still saddled with fewer and less adequate choices built on a tech industry which subsidizes consumer prices by compromising privacy and diminishing ownership through subscriptions and restrictive DRM. Today’s empires of industry exert more and more influence on our day to day life, building a greater lock-in to their monoculture. When they fail, the scale and impact rival those of a government shutdown.

We deserve a more stable and secure digital future, where an error code doesn't put lives at risk. Vital infrastructure cannot be built on a digital monoculture.

To do this, antitrust enforcers, including the FTC, the Department of Justice (DOJ), and state attorneys general must increase scrutiny in every corner of the tech industry to prevent dangerous levels of centralization. An important first step would be to go after lock-in practices by IT vendors.

Procurement and Vendor Lock-In

Most organizations depend on their IT teams, even if that team is just the one friend who is “good with computers”. It’s quite common for these teams to be significantly under-resourced, forced to meet increasingly complex needs from the organization with a stagnant or shrinking budget.

This squeeze creates a need for off-the-shelf solutions that centralize that expertise among vendors and consultants. Renting these IT solutions from major companies like Microsoft or Google may be cost-effective, but it entrusts a good deal of control to those companies.

All too often, however, software vendors take advantage of this dynamic. They will bundle many services for a low initial price, making an organization wholly reliant on them, and then hinder the organization's ability to adopt alternative tools while raising prices. This is the longstanding manipulative playbook of vendor lock-in.

Once locked in, a company will discover switching to alternatives is costly both in terms of money and effort. Say you want to switch email providers. Rather than an easy way to port over data and settings, your company will need to resort to manual efforts or expensive consultant groups. This is also often paired with selective interoperability, like having an email client work smoothly with a bundled calendar system, while a competitor’s service faces unstable or deliberately broken support.
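
As a small illustration of what real portability looks like when data lives in an open format, here is a sketch using Python's standard-library mailbox module; the file names are hypothetical, and it assumes the old provider can export mail as mbox:

```python
# A sketch of data portability via an open format: if your provider lets you
# export mail as mbox, moving it elsewhere is a few lines, not a consultancy.
import mailbox

source = mailbox.mbox("export-from-old-provider.mbox")       # hypothetical
destination = mailbox.mbox("import-to-new-provider.mbox")    # hypothetical

destination.lock()
try:
    for message in source:
        destination.add(message)  # messages survive the move intact
finally:
    destination.flush()
    destination.unlock()

print(f"Ported {len(destination)} messages")
```

With selective interoperability, by contrast, the export either doesn't exist or arrives in a format only the vendor's own tools can read.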

Lock-in doubles down on a monopoly’s power and entrenches it across different markets. That is why EFF calls for interoperability to end vendor lock-in, and let IT teams choose the tools that reflect the values and priorities of their organization.

Buying or building more highly-tailored systems makes sense in a competitive market. It’s unlikely a single cloud provider will be the best at every service, and with interoperability, in-house alternatives become more viable to develop and host. Fostering more of that internal expertise can only bolster the resilience of bigger institutions.

Fallout from The Cloud

Allowing the economy and the well-being of countless people to rely on a few cloud services is reprehensible. The CrowdStrike Falcon incident is just the latest and largest in a growing list of hacks, breaches, and collapses coming to define the era. And each time, everyday people endure real harms.

Each time, we see the poorest and most marginalized people face costly or even deadly consequences. A grounded flight might mean having to spend money on a hotel, and it might mean losing a job. Strained hospital capacity means fewer people receive lifesaving care. Each time these impacts further exacerbate existing inequalities, and they are happening with increasing frequency.

We must reject this as the status quo. CrowdStrike’s outage is a billion-dollar wake-up call to make antitrust an immediate priority. It's not just about preventing the next crash—it's about building a future where our digital world is as diverse and resilient as the people who depend on it.

Podcast Episode: Fighting Enshittification

By: Josh Richman
July 2, 2024 at 03:06

The early internet had a lot of “technological self-determination" — you could opt out of things, protect your privacy, control your experience. The problem was that it took a fair amount of technical skill to exercise that self-determination. But what if it didn’t? What if the benefits of online privacy, security, interoperability, and free speech were more evenly distributed among all internet users?

(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that award-winning author and EFF Special Advisor Cory Doctorow wants us to fight for. His term “enshittification” — a downward spiral in which online platforms trap users and business customers alike, treating them more and more like commodities while providing less and less value — was selected by the American Dialect Society as its 2023 Word of the Year. But, he tells EFF’s Cindy Cohn and Jason Kelley, enshittification analysis also identifies the forces that used to make companies treat us better, helping us find ways to break the cycle and climb toward a better future. 

In this episode you’ll learn about: 

  • Why “intellectual property” is a misnomer, and how the law has been abused to eliminate protections for society 
  • How the tech sector’s consolidation into a single lobbying voice helped bulldoze the measures that used to check companies’ worst impulses 
  • Why recent antitrust actions provide a glimmer of hope that megacompanies can still be forced to do better for users 
  • Why tech workers’ labor rights are important to the fight for a better internet 
  • How legislative and legal losses can still be opportunities for future change 

Cory Doctorow is an award-winning science fiction author, activist, journalist and blogger, and a Special Advisor to EFF. He is the editor of Pluralistic and the author of novels including “The Bezzle” (2024), “The Lost Cause” (2023), “Attack Surface” (2020), and “Walkaway” (2017); young adult novels including “Homeland” (2013) and “Little Brother” (2008); and nonfiction books including “The Internet Con: How to Seize the Means of Computation” (2023) and “How to Destroy Surveillance Capitalism” (2021). He is EFF's former European director and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

CORY DOCTOROW
So interop, you know, it's the idea that you don't need to buy your washing machine from the same people who sold you your clothes. You can use anyone's washing soap in that washing machine. Your dishes go in, in any dishwasher. Anyone's gas or electricity go into your car, you can bring your luggage onto any airline.
You know, there's just this kind of general presumption that things work together and sometimes that's just a kind of happy accident or a convergence where, you know, the airlines basically all said, okay, if it's bigger than seventy-two centimeters, we're probably gonna charge you an extra fee. And the luggage makers all made their luggage smaller than seventy-two centimeters, or you know, what a carry-on constitutes or whatever. Sometimes it's very formal, right? Sometimes like you go to a standards body and you're like, this is the threading gauge and size of a standard light bulb. And that means that every light bulb that you buy is gonna fit into every light bulb socket.
And you don't have to like read the fine print on the light bulb to find out if you've bought a compatible light bulb. And, sometimes it's adversarial. Sometimes the manufacturer doesn't want you to do it, right? Like, so HP wants you to spend something like $10,000 a gallon on printer ink and most of us don't want to spend $10,000 a gallon on printer ink and so out there are some people who figured out how HP printers ask a cartridge, ‘Hey, are you a cartridge that came from HP?’.
And they figured out how to get cartridges that aren't made by HP to say ‘Why yes, I am.’ And you know, it's not like the person buying the cartridge is confused about this. They are specifically like typing into a search engine, ‘How do I avoid paying HP $10,000 a gallon?’

CINDY COHN
That's Cory Doctorow. He's talking about all the places in our lives where, whether we call it that or not, we get to enjoy the power of interoperability.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
We spend a lot of time here at EFF warning about the things that could go wrong online -- and then of course jumping into the fray when they do go wrong. But on this show we're trying to envision what the world looks like if we start to get things right.

JASON KELLEY
Our guest today is Cory Doctorow. He is one of the world’s leading public thinkers about the digital world, as well as an author and activist. He writes both fiction and non fiction that has more ideas per page than anyone else we know.

CINDY COHN
We’re lucky enough that he’s been one of our colleagues at EFF for over 20 years and he’s one of my dearest friends. We had Cory on the podcast during our first season. I think he was our very first guest - but we thought it was time to check in again. And that’s not only because he’s so much fun to talk to, but also because the central idea he has championed for addressing the problems of platform monopolies – an idea called interoperability which we also call competitive compatibility – it’s started to get real traction in policy spaces both in the US and in Europe.
I quote Cory a lot on this show, like the idea that we don't want to go back to the good old days. We're trying to create the good new days. So I thought that it was a good place to start. What do the good new days look like in the Coryverse?

CORY DOCTOROW
So the old good internet was characterized by a very high degree of what I call like technological self-determination. Just the right to just decide how the digital tools you use work.
The problem was that it also required a high degree of technical skill. There are exceptions right. I think ad blockers are kind of our canonical exception for, you know, describing what a low-skill, high-impact element of technological self-determination is. Like more than half of all web users now run ad blockers. Doc Searls calls it the largest consumer boycott in human history.
And you don't have to be a brain surgeon or a hacker to install an ad blocker. It's just like a couple of clicks and away you go. And I think that a new good internet is one in which the benefits of technological self-determination, all the things you get beyond an ad blocker, like, you know, I'm speaking to you from a household that's running a Pi-hole, which is like a specialized data appliance that actually blocks ads in other things like smart TVs and apps and whatever.
I have a personal VPN that I run off my home network so that when I'm roaming - I just got back from Germany and they were blocking the port that I used for my mail server, and I could VPN into my house and get my email as though I were connected via my home - all of those things should just accrue to you with the ease that you get from an ad blocker because we can harness markets and tinkerers and cooperatives and people who aren't just making a thing to scratch their own itch, but are actually really invested in other people who aren't technically sophisticated being able to avail themselves of these tools too. That's the new good internet

CINDY COHN
I love that. I mean, you know, what is it? The future is here. It's just not evenly distributed. You just want to evenly distribute the future, and also make it simpler for folks to use.

CORY DOCTOROW
Yeah. You know, the problem of the old good internet was not the part where skilled technical practitioners didn't have to put up with nonsense from companies that didn't have their best interests at heart. Right?
The problem was that not everybody got that. Well, the good future of the internet is one in which we more evenly distribute those benefits. The bad future of the internet we're living in now is the one in which it's harder and harder, even for skilled practitioners, to enjoy those benefits.

CINDY COHN
And harder for the rest of us to get them, right? I hear two things, both as an end user, my world's gonna have a lot more choices, but good choices about things I can do to protect myself and places I can look for that help. And then as somebody who's a hacker or an innovator, you're gonna have a lot easier way to take your good idea, turn it into something and make it actually work, and then let people find it.

CORY DOCTOROW
And I think it's even more than that, right? Because I think that there's also the kind of incentives effect. You know, I'm not the world's biggest fan of markets as the best way to allocate all of our resources and solve all of our problems. But one thing that people who really believe in markets like to remind us of is that incentives matter.
And there is a kind of equilibrium in the product planning meeting where someone is always saying, ‘If we make it this bad, will someone type into a search engine, ‘How do I unrig this game?’ Because once they do that, then all bets are off, right? Think about again, back to ad blockers, right? If, if someone in the boardroom says, Hey, I've calculated that if we make these ads 20% more invasive we’ll increase our revenue per user by 2%.
Someone else who doesn't care about users necessarily, might say, yeah, but we think 20% of users will type ‘How do I block ads’ into a search engine as a result of this. And the expected revenue from that user doesn't just stay static at what we've got now instead of rising by 2%. The expected revenue from that user falls to zero forever.
We'll never make an advertising dime off of that user once they type ‘How do I block ads’ into a search engine. And so it isn't necessary even that the tools defend you. The fact that the tools might defend you changes the equilibrium, changes the incentives, changes the conduct of firms. And where it fails to do that, it then affords you a remedy.
So it's both belt and suspenders. Plan A and plan B.

JASON KELLEY
It sounds like we're veering happily towards some of the things that you've talked about lately with the term that you coined last year about the current moment in our digital world: Enshittification. I listened to your McLuhan lecture and it brought up a lot of similar points to what you're talking about now. Can you talk about this term? In brief, what does it mean, and, you know, why did the American Dialect Society call it the word of the year?

CORY DOCTOROW
Right. So I mean, the top level version of this is just that tech has these unique, distinctive technical characteristics that allow businesses to harm their stakeholders in ways that are distinct from the ways that other companies can just because like digital has got this flexibility and this fluidity.
And so it sets up this pattern that as the regulation of tech and as the competition for tech and as the force that workers provided as a check on tech's worst, worst impulses have all failed, we've got this dynamic where everything we use as a platform, and every platform is decaying in the same way, where they're shifting value first to users, to trap users inside a walled garden, and then bringing in business customers with the promise of funneling value from those users to those business customers, trapping those business customers, and then once everybody is held hostage, using that flexibility of digital tools to take that value away without releasing the users.
So even though the service is getting worse and worse for you, and it's less and less valuable to you, you still find yourself unable to leave. And you are even being actively harmed by it as the company makes it worse and worse.
And eventually it reaches a breaking point. Eventually things are so bad that we leave. But the problem is that that's like a catastrophic ending. That's the ending that, you know, everybody who loved LiveJournal had. Where they loved LiveJournal and the community really mattered to them.
And eventually they all left, but they didn't all end up in the same place. The community was shattered.
They just ended up fragmented and you can still hear people for whom LiveJournal was really important, saying like, I never got that back. I lost something that mattered to me. And so for me, the Enshittification analysis isn't just about like how do we stop companies from being bad, but it's about how we allow people who are trapped by bad companies to escape without having to give up as much as they have to give up now.

CINDY COHN
Right, and that leads right into adversarial interoperability, which is a term that I think was coined by Seth Schoen, EFF’s original staff technologist. It's an idea that you have really thought about a lot Cory and developed out. We heard you talk at the beginning of the episode, with that example about HP printers.

CORY DOCTOROW
That adversarial interoperability, it's been in our technology story for as long as we've had digital tools, because digital tools have this flexibility we've alluded to already. You know, the only kind of digital computer we can make is the Turing complete von Neumann machine.
It runs every program that's valid and that means that, you know, whenever a manufacturer has added an anti-feature or done something else abusive to their customers, someone else has been able to unlock it.
You know, when IBM was selling mainframes on the cheap and then charging a lot of money for printers and you know, keyboards and whatever, there were these things called plug compatible peripherals.
So, you know these companies they call the Seven Dwarfs, Fujitsu and all these other tech companies that we now think of as giants, they were just cloning IBM peripherals. When Apple wanted to find a way for its users to have a really good experience using Microsoft Office, which Microsoft had very steadfastly refused them and had, uh, made just this unbelievably terrible piece of software called, uh, Office for the Mac that just didn't work and had all these compatibility problems, Steve Jobs just had his technologist reverse engineer Office, and they made iWork: Pages, Numbers, and Keynote.
And it can read and write all the files from Excel, PowerPoint and Word. So this has always been in our story and it has always acted as a hedge on the worst impulses of tech companies.
And where it failed to act as a hedge, it created an escape valve for people who are trapped in those bad impulses. And as tech has become more concentrated, which itself is the result of a policy choice not to enforce antitrust law, which allowed companies to gobble each other up, become very, very concentrated.
It became easier for them to speak with one voice in legislative outlets. You know, when Seth coined the term adversarial interoperability, it was about this conspiracy among the giant entertainment companies to make it illegal to build a computer that they hadn't approved of called the Broadcast Flag.
And the reason the entertainment companies were able to foist this conspiracy on the tech industry, which was even then, between one and two orders of magnitude larger than the entertainment companies, is that the entertainment companies were like seven firms and they spoke with one voice and tech was a rabble.
It was hundreds of companies. We were in those meetings for the broadcast protection discussion group where you saw hundreds of companies at each other's throats not able to speak with one voice. Today, tech speaks with one voice, and they have taken those self-help measures, that adversarial interoperability, that once checked their worst impulses, and they have removed them from us.
And so we get what Jay Freeman calls felony contempt of business model where, you know, the act of reverse engineering a printer cartridge or an office suite or mobile operating system gives rise to both civil and criminal penalties and that means no one invests in it. People who do it take enormous personal risks. There isn't the kind of support chain.
You definitely don't get that kind of thing where it's like, ‘just click this button to install this thing that makes your experience better.’ To the extent that it even exists, it's like, download this mysterious software from the internet. Maybe compile it yourself, then figure out how to get it onto your device.
No one's selling you a dongle in the checkout line at Walmart for 99 cents that just jailbreaks your phone. Instead, it's like becoming initiated into the Masons or something to figure out how to jailbreak your phone.

CINDY COHN
Yes, we managed to free jailbreaking directly through the exceptions process in the DMCA but it hasn’t ended up really helping many people. We got an exception to one part of the law but the very next section prevents most people from getting any real benefit.

CORY DOCTOROW
At the risk of like teaching granny to suck eggs, we know what the deficiency in the, in the exceptions process is, right? I literally just explained this to a fact checker at the Financial Times who's running my Enshittification speech, who's like you said that it's illegal to jailbreak phones, and yet I've just found this process where they made it legal to jailbreak phones and it's like, yeah, the process makes it legal for you to jailbreak your phone. It doesn't make it legal for anyone to give you a tool to jailbreak your phone or for you to ask anyone how that tool should work or compare notes with someone about how that, so you can like, gnaw your own jailbreaking tool out of a whole log in secret, right? Discover the, discover the defect in iOS yourself.
Figure out how to exploit it yourself. Write an alternative version of iOS yourself. And install it on your phone in the privacy of your own home. And provided you never tell anyone what you've done or how you did it, the law will permit you to do this and not send you to prison.
But give anyone any idea how you're doing it, especially in a commercial context where it's, you know, in the checkout aisle at the Walmart for 99 cents, off to prison with you. Five-hundred-thousand-dollar fine and a five-year prison sentence for a first offense for violating Section 1201 of the DMCA in a commercial way. Right? So, yeah, we have these exceptions, but they're mostly ornamental.

CINDY COHN
Well, I mean, I think that that's the, you know, it's profoundly weird, right? This idea that you can do something yourself, but if you help somebody else do it, that's illegal. It's a very strange thing. Of course, EFF has not liked the Digital Millennium Copyright Act since 1998 when it was passed, or probably 1995 when they started talking about it. But it is a situation in which, you know, we've chipped away at the law, and this is a thing that you've written a lot about. These fights are long fights and we have to figure out how to be in them for the long run and how to claim victory when we get even a small victory. So, you know, maybe this is a situation in which us crowing about some small victories has led people to be misled about the overarching story, which is still one where we've got a lot of work to do.

CORY DOCTOROW
Yeah, and I think that, you know, the way to understand this is as not just the DMCA, but also all the other things that we just colloquially call IP Law that constitute this thing that Jay calls felony contempt of business model. You know, there's this old debate among our tribe that, you know, IP is the wrong term to use. It's not really property. It doesn't crisply articulate a set of policies. Are we talking about trademark and patent and copyright, or do we wanna throw in broadcast rights and database rights and you know, whatever, but I actually think that in a business context, IP means something very, very specific.
When an investor asks a founder, ‘What IP do you have? What they mean is what laws can you invoke that will allow you to exert control over the conduct of your competitors, your critics, and your customers?’ That's what they mean. And oftentimes, each IP law will have an escape valve, like the DMCA's triennial exemptions. But you can layer one in front of the other, in front of the other in order to create something where all of the exemptions are plugged. So, you know, copyright has these exceptions but then you add trademark where like Apple is doing things like engraving nearly invisible apple logos on the components inside of its phones, so that when they're refurbished in the far east and shipped back as parts for independent repair, they ask the customs agency in the US to seize the parts for tarnishment of their trademark because the parts are now of an unknown quality and they bear their logo, which means that it will degrade the public's opinion of the reliability of an Apple product. So, you know, copyright and patent don't stop them from doing this, but we still have this other layer of IP and if you line the layers up in the right way, and this is what smart corporate lawyers do - they know the right pattern to line these different protections up, such that all of the exceptions that we're supposed to provide a public interest, that were supposed to protect us as the users or protect society - each one of those is choked off by another layer.

CINDY COHN
I think that’s one of my biggest frustrations in fixing the internet. We get stuck fighting one fight at a time and just when we pass one roadblock we have to navigate another. In fact, one that we haven’t mentioned yet is contract law, with terms of service and clickwrap license agreements that block innovation and interoperability. It starts to feel more like a game, you know, can our intrepid coder navigate around all the legal hurdles and finally get to the win where they can give us back control over our devices and tools?

CORY DOCTOROW
I mean, this is one of the things that's exciting about the antitrust action that we're getting now, is that I think we're gonna see a lot of companies being bound by obligations whose legitimacy they don't acknowledge and which they are going to flout. And when they do, presuming that the enforcers remain committed to enforcing the law, we are going to have opportunities to say to them, ‘Hey, you're gonna need to enter into a settlement that is gonna restrict your future conduct. You're gonna have to spin off certain things. You're gonna have to allow certain kinds of interop or whatever’.
That we got these spaces opening up. And this is how I think about all of this and it is very game-like, right? We have these turns. We're taking turns, our adversaries are taking turns. And what we want is not just to win ground, but we want to win high ground. We want to win ground from which we have multiple courses of action that are harder to head off. And one of the useful things about the Enshittification analysis is it tries to identify the forces that made companies treat us good. I think sometimes the companies treated us well because the people who ran them were honorable. But also you have to ask how those honorable people resisted their shareholders’ demands to shift value from the firm to their users or the other direction. What forces let them win, you know, in that fight. And if we can identify what forces made companies treat technology users better on the old good internet, then we can try and build up those forces for a new good internet. So, you know, one of the things that I think really helped the old good internet was the paradox of the worker power of the tech worker because tech workers have always been well compensated. They've always had a lot of power to walk out of the job and go across the street and get another job with someone better. Tech Workers had all of this power, which meant that they didn't ever really like form unions. Like tech union density historically has been really low. They haven't had formal power, they've had individual power, and that meant that they typically enjoyed high wages and quite cushy working conditions a lot of the time, right? Like the tech campus with the gourmet chef and the playground and the gym and the sports thing and the bicycles and whatever. But at the same time, this allowed the people they worked for to appeal to a sense of mission among these people. And it was, these were these like non-commercial ethical normative demands on the workforce. And the appeals to those let bosses convince workers to work crazy hours. Right? You know, the extremely hardcore Elon Musk demand that you sleep under your desk, right? This is where it comes from, this sense of mission which meant, for the bosses, that there was this other paradox, which was that if you motivate your workers with a sense of mission, they will feel a sense of mission. And when you say, ‘Hey, this product that you fought really hard for, you have to make worse, right? You've, you know, missed your gallbladder surgery and your mom's funeral and your kid's little league championship to make this product. We want you to stick a bunch of gross ads in it,’ the people who did that job were like, no, I feel a sense of mission. I will quit and walk across the street and get another job somewhere better if this is what you demand of me. One of the constraints that's fallen away is this labor constraint. You know, when Google does a stock buyback and then lays off 12,000 workers within a few months, and the stock buyback would pay their wages for 27 years, like the workers who remain behind get the message that the answer to, no, I refuse to make this product worse is fine, turn in your badge and don't let the door hit you in the ass on the way out. And one of the things we've always had a trade in at EFF is tech workers who really cared about their users. Right? That's been the core of our membership. Those have been the whistleblowers we sometimes hear from. Those have been our clients sometimes. 
And we often say when companies have their users’ backs, then we have the company's back. If we were to decompose that more fully, I think we would often find that the company that has its users' back really has a critical mass of indispensable employees who have their users’ back, that within the balance of power in the company, it's tipping towards users. And so, you know, in this moment of unprecedented union formation, if not union density, this is an area where, you know, you and I, Cindy have written about this, where, where tech rights can be workers' rights, where bossware can cut against labor rights and interoperable tools that defeat bossware can improve workers’ agency within their workplace, which is good for them, but it's good for the people that they feel responsibility for, the users of the internet.

CINDY COHN
Yeah. I remember in the early days when I first joined EFF and Adobe had had the FBI arrest Dmitri Sklyarov at DefCon because he developed a piece of software that allowed people to copy and move their Adobe eBooks into other formats and platforms. Some of EFF’s leadership went to Adobe’s offices to talk to their leadership and see if we could get them to back off.
I remember being told about the scene because there were a bunch of hackers protesting outside the Adobe building, and they could see Adobe workers watching them from the windows of that building. We knew in that moment that we were winning, that Adobe was gonna back down because their internal conversations were, how come we're the guys who are sending the FBI after a hacker?
We had something similar happen with Apple more recently when Apple announced that it was going to do client side scanning. We knew from the tech workers that we were in contact with inside the company that breaking end-to-end encryption was something that most of the workers didn't approve of. We actually flew a plane over Apple’s headquarters at One Infinite Loop to draw attention to the issue. Now whether it was the plane or not, it wasn't long before Apple backed down because they felt the pressure from inside, as well as outside. I think the tech workers are feeling disempowered right now, and it's important to keep telling these stories and reminding them that they do have power because the first thing that a boss who wants to control you does, is make you think you're all alone and you don't have any power. I appreciate that in the world we’re envisioning where we start to get tech right, we're not just talking about users and what users get. We're talking about what workers and creators and hackers and innovators get, which is much more control and the ability to say no or to say yes to something better than the thing that the company has chosen. I'm interested in continuing to try to tell these stories and have these conversations.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Cory Doctorow. Cory is well known for his writing and speaking but what some people may not realize is that he is a capital A Activist. I work with him closely on the activism team here at EFF, and I have seen firsthand how sometimes his eyes go red and he will throw everything he has into a fight. So I wanted to get him to talk a bit more about the activism side of his work, and what fuels that.

CORY DOCTOROW
I tried to escape EFF at one point. I actually was like, God, you know, the writing and the activism, I can't do both. I'm just gonna do one. And so I went off to do the writing for a few years, and I got so pissed off with things going wrong in the world that I wasn't actively engaged in trying to fix that I just lost it. And I was like, I, whatever negative effects accrue due to overwork are far less significant to me, both like intellectually and kind of emotionally, than the negative effects I get from feeling hopeless, right, and helpless and sitting on the sidelines while things that are just untenable, go on. And, you know, Cindy said it before, it's a long game, right? The activism game. We are sowing the seeds of a crop that we may never reap. And I am willing to understand and believe and make my peace with the idea that some of the stuff that I'm doing will be victories after I've left the field, right, it'll be for people who haven't even graduated high school yet, let alone going to work for EFF or one of our allies.
And so when I see red, when I get really angry, when, I don't know, you know, the DRM in browsers at the W3C or the European Union trying for mandatory copyright filters for online services, I think like this is a fight we may not win, but it's a fight that we must fight, right? The stakes are too high not to win it, and if we lose it this time around, we will lay the groundwork for a future victory. We will create the people who are angry that the policy came out this way, who, when some opportunity opens up in the future, because you know these fights that we fight, the side that we're on is the side of producing something good and stable and beneficial. And the thing that we're fighting against has massive downstream harms, whether that's mandatory copyright filters or client-side scanning or breaking end-to-end encryption, right? Like if we lose a breaking end-to-end encryption fight, what we have lost is the safety of millions of people in whatever country that rule has been enacted, and that means that in a way that is absolutely deplorable and that the architects of these policies should be ashamed of, some of those people are gonna come to the most terrible harms in the future. And the thing that we should be doing because we have lost the fight to stop those harms from occurring, is be ready to when those harms occur, to be able to step in and say, not just we told you so, but here's how we fix it. Here's the thing that we're going to do to turn this crisis into the opportunity to precipitate a change.

JASON KELLEY
Yeah, that's right. Something that has always pleased me is when we have a guest here on the podcast and we’ve had many, who have talked about the blue ribbon campaign. And it’s clear that, you know, we won that fight, but years and years ago, we put together this coalition of people, maybe unintentionally, that still are with us today. And it is nice to imagine that, with the wins and the losses, we gain bigger numbers as we lay that groundwork.

CINDY COHN
And I think there is something also fun about trying to build the better world, being the good guys. I think there is something powerful about that. The fights are long, they're hard. I always say that, you know, the good guys throw better parties. And so on the one hand it's, yes, it's the anger; your eyes see red, we have to stop this bad thing from happening. But the other thing is that the other people who are joining with you in the fight are really good people to hang out with. And so I guess I, I wanna make sure that we're talking about both sides of a kind of activist life because they're both important. And if it wasn't for the fun part - fun when you win - sometimes a little gallows humor when you don't, that's as important as the anger side because if you're gonna be in it for the long run you can't just run on, you know, red-eyed anger alone.

CORY DOCTOROW
You know, I have this great laptop from this company Framework. I promise you this goes somewhere that, uh, is a user serviceable laptop. So it comes with a screwdriver. Even someone who's really klutzy like me can fix their laptop. And, uh, I drop my laptops all the time - and the screws had started coming loose on the bottom, and they were like, ‘hey, this sounds like a thing that we didn't anticipate when we designed it. Why don't we ship you a free one and you ship us back the broken one, we can analyze it for future ones’. So, I just did this, I swapped out the bottom cover of my laptop at the weekend, which meant that I had a new sticker surface for my laptop. And I found a save.org ‘some things are not for sale’ sticker, which was, you know, this incredible campaign that we ran with our lost and beloved colleague Elliot and putting that sticker on felt so good. You know, it was just like, yeah, this is, this is like a, this is like a head on a pike for me. This is great.

CINDY COHN
And for those who may not have followed that, just at the beginning of Covid actually, there was an effort by private equity to buy the control of the .org domain, which of course means EFF.org, but it means every other nonprofit. And we marshaled a tremendous coalition of nonprofits and others to essentially, you know, make the deal not happen. And save .org for, you know, the .orgs. And as Cory mentioned, our dear friend Elliot who was our activism director at the time, that was his last campaign before he got sick. And, we did, we, we won. We saved .org. Now that fight continues. Uh, things are not all perfect in .org land, but we did head that one off and that included a very funky, nerdy protest in front of an ICANN meeting that, uh, that a bunch of people came to.

CORY DOCTOROW
Top level domains still a dumpster fire. In other words, in other news, water's still wet. You know, the thing about that campaign that was so great, is it was one where we didn't have a path to victory. We didn't have a legal leg to stand on. The organization was just like operating in its own kind of bubble where it was fully insulated from, you know, formally, at least on paper, insulated from public opinion, from stakeholder opinions. It just got to do whatever it wanted. And we just like kind of threw everything at it. We tried all kinds of different tactics and cumulatively they worked and there were weird things that came in at the end. Like Xavier Becerra, who is then the Attorney General of California going like, well, you're kind of, you're a California nonprofit. Like, I think maybe we're gonna wanna look at this.
And then all of a sudden everyone was just like, no, no, no, no, no. But you know, it wasn't like Becerra saved it, right? It was like we built up the political pressure that caused the Attorney General of California who's got a thing or two on his plate, to kind of get up on his hind legs and go, ‘Hey, wait a second. What's going on here?’
And there've been so many fights like that over the years. You know, this is, this is the broadcast treaty at the UN. I remember when we went, our then colleague, Fred von Lohmann was like, ‘I know how to litigate in the United States 'cause we have like constitutional rights in the United States. The UN is not going to let NGOs set the agenda or sue. You can't force them to give you time.’ You know, it's like you have all the cards stacked against you there but we killed the broadcast flag and we did it like by being digitally connected with activists all over the world that allowed us to exploit the flexibility of digital tools to have a fluid improvisational style that allowed us at each turn to improvise in the moment, new tactics that went around the roadblocks that were put in our way. And some of them were surreal, like our handouts were being stolen and hidden in the toilets. Uh, but you know, it was a very weird fight.
And we trounced the most powerful corporations in the world in a forum that was completely stacked against us. And you know, that's the activist joy here too, right? It's like you go into these fights with the odds stacked against you. You never know whether or not there is a lurking potential for a novel tactic that your adversary is completely undefended on, where you can score really big, hard-to-anticipate wins. And I think of this as being related to a theory of change that I often discuss when people ask me about optimism and pessimism.
Because I don't like optimism and pessimism. I think they're both a form of fatalism. That optimism and pessimism are both the idea that the outcome of events are unrelated to human conduct, right? Things will get worse or things will get better. You just sit on the sidelines. It's coming either way. The future is a streetcar on tracks and it's going where it's going.
But I think that hope is this idea that if you're like, trying to get somewhere and you don't know how to get there, you're trying to ascend a gradient towards a better future - if you ascend that gradient to the extent that you can see the path from where you are now, that you can attain a vantage point from which you can see new possible ways of going that were obscured from where you were before, that doing something changes the landscape, changes where you're situated and may reveal something else you can do.

CINDY COHN
Oh, that's such a lovely place to end. Thank you so much, Cory, for taking time to talk with us. We're gonna keep walking that path, and we're gonna keep looking for the little edges and corners and ways, you know, that we can continue to push forward the better internet because we all deserve it.

JASON KELLEY
Thanks, Cory. It's really nice to talk to you.

CORY DOCTOROW
Oh, it was my pleasure.

JASON KELLEY
You know, I get a chance to talk to Cory more often than most people, and I'm still just overjoyed when it gets to happen. What did you think of that conversation, Cindy?

CINDY COHN
What I really liked about it is that he really grounds, you know, what could be otherwise, a kind of wonky thing - adversarial interoperability or competitive compatibility - in a list of very concrete things that have happened in the past and not the far past, the fairly recent past. And so, you know, building a better future really means just bringing some of the tools to bear that we've already brought to bear in other situations, just to our new kind of platform Enshittification world. Um, and I think it makes it feel much more doable than something that might be, you know, a pie in the sky. And then we all go to Mars and everything gets better.

JASON KELLEY
Yeah. You know, he's really good at saying, here's how we can learn from what we actually got right in the past. And that's something people don't often do in this, in this field. It's often trying to learn from what we got wrong. And the part of the conversation that I loved was just hearing him talk about how he got back into doing the work. You know, he said he wanted to do writing or activism, because he was just doing too much, but in reality, he couldn't do just one of the two because he cares so much about what's going on. It reminded me, when he was saying sort of what gets his eyes to turn red, of when we were speaking with Gay Gordon-Byrne about right to repair, and how she had been retired and just decided, after getting pulled back in again and again, to go wholly committed to fighting for the right to repair. You know that quote from The Godfather about being continually pulled back in. This is Cory and, and people like him, I think, to a tee.

CINDY COHN
Yeah, I think so too. That reminded me of what, what she said. And of course I was on the other side of it. I was one of the people that Cory was pinging over and over again.

JASON KELLEY
So you pulled him back in.

CINDY COHN
Well, I think he pulled himself back in. I was just standing there. Um, but, but it is, it is fun to watch somebody feel their passion grow so much that they just have to get back into the fight. And I think Gay really told that same trajectory of how, you know, sometimes something just bugs you enough that you decide, look, I gotta figure out how to get into this fight and, and, and make things better.

JASON KELLEY
And hopefully people listening will have that same feeling. And I know that, you know, many of our supporters do already.
Thanks for joining us for this episode of How to Fix the Internet. If you have any feedback or suggestions, we would be happy to hear from you. Visit EFF.org slash podcast and click on listener feedback. And while you're there, maybe you could become an EFF member and maybe you could pick up some merch. We've got very good t-shirts. Or you can just peruse to see what's happening in digital rights this week and every week. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Xena's Kiss slash Madea's Kiss by M. Wick, Probably Shouldn't by J. Lang featuring Mr. Yesterday, Come Inside by Zepp Herm featuring Snowflake, and Drops of H2O the Filtered Water Treatment by J. Lang featuring Airtone. Our theme music is by Nat Keefe of Beatmower with Reed Mathis. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I hope you'll join us again. I'm Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

How the FTC Can Make the Internet Safe for Chatbots

June 28, 2024 at 16:13

No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI.

Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.

But Chair Khan didn’t treat the question as an excuse to launch into the policymaker’s verbal equivalent of a compulsory gymnastics exhibition.

Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act (FTCA5).

Section 5 gives the FTC a broad mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or practices.” Chair Khan has made extensive use of these powers during her first term as chair, for example, by banning noncompetes and taking action on online privacy.

At EFF, we share many of the widespread concerns over privacy, fairness, and labor rights raised by AI. We think that copyright law is the wrong tool to address those concerns, both because of what copyright law does and doesn’t permit, and because establishing copyright as the framework for AI model-training will not address the real privacy and labor issues posed by generative AI. We think that privacy problems should be addressed with privacy policy and that labor issues should be addressed with labor policy.

That’s what made Chair Khan’s remarks so exciting to us: in proposing that Section 5 could be used to regulate AI training, Chair Khan is opening the door to addressing these issues head on. The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights.

Take the problem of AI “hallucinations,” which is the industry’s term for the seemingly irrepressible propensity of chatbots to answer questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”

The question of whether chatbots can be taught not to “hallucinate” is far from settled. Some industry leaders think the problem can never be solved, even as startups publish (technically impressive-sounding, but non-peer reviewed) papers claiming to have solved the problem.

Whether or not the problem can be solved, it's clear that for the commercial chatbot offerings in the market today, "hallucinations" come with the package. Or, put more simply: today's chatbots lie, and no one can stop them.

That’s a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm. It’s hard enough to attend your grandmother’s funeral without the added pain of your airline’s chatbot lying to you about the bereavement fare.

Here’s where the FTC’s powers can help the American public:

The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an “unfair and deceptive practice” that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails.

After all, if a company doesn’t get in trouble when its chatbot lies to a customer, why would they pay extra for a chatbot that has been designed not to lie? And if there’s no reason to pay extra for a chatbot that doesn’t lie, why would anyone invest in solving the “hallucination” problem?

Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace. If you can prove that your chatbot won’t lie to your customers’ users, you can also get an insurance company to write you a policy that will allow you to indemnify your customers against claims arising from your chatbot’s output.

But until someone does figure out how to make a “hallucination”-free chatbot, guidance promising serious consequences for chatbots that deceive users with “hallucinated” lies will push companies to limit the use of chatbots to low-stakes environments, leaving human workers to do their jobs.

The FTC has already started down this path. Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency’s thinking on how companies should present their chatbots to users.

We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions – especially if that guidance is backed up with muscular enforcement.

What’s the Difference Between Mastodon, Bluesky, and Threads?

The ongoing Twitter exodus sparked life into a new way of doing social media. Instead of a handful of platforms trying to control your life online, people are reclaiming control by building more open and empowering approaches to social media. Some of these you may have heard of: Mastodon, Bluesky, and Threads. Each is distinct, but their differences can be hard to understand as they’re rooted in their different technical approaches. 

The mainstream social web arguably became "five websites, each consisting of screenshots of text from the other four," but in just the last few years radical and controversial changes to major platforms were a wake-up call to many, and are driving people to seek alternatives to the billionaire-driven monocultures.

Two major ecosystems have emerged in its wake, both encouraging the variety and experimentation of the earlier web. The first, built on the ActivityPub protocol, is called the Fediverse. While it includes many different kinds of websites, Mastodon and Threads have taken off as alternatives to Twitter that use this protocol. The other is the AT Protocol, powering the Twitter alternative Bluesky.

These protocols, a shared language between computer systems, allow websites to exchange information. It's a simple concept you're benefiting from right now, as protocols enable you to read this post in your choice of app or browser. Opening this freedom to social media has a huge impact, letting everyone send and receive posts in their own preferred way. Even better, these systems are open to experimentation and can cater to every niche, while still connecting to everyone in the wider network. You can leave the dead malls of platform capitalism and find the services which cater to you.

To save you some trial and error, we have outlined some differences between these options and what that might mean for them down the road.

ActivityPub and AT Protocols

ActivityPub

The Fediverse goes a bit further back, but ActivityPub's development by the World Wide Web Consortium (W3C) started in 2014. The W3C is a public-interest non-profit organization which has played a vital role in developing the open international standards that define the internet, like HTML and CSS (for better or worse). Their commitment to ActivityPub gives some assurance the protocol will be developed in a stable and ostensibly consensus-driven process.

This protocol requires a host website (often called an “instance”) to maintain an “inbox” and “outbox” of content for all of its users, and selectively share this with other host websites on behalf of the users. In this federation model users are accountable to their instance, and instances are accountable to each other. Misbehaving users are banned from instances, and misbehaving instances are cut off from others through “defederation.” This creates some stakes for maintaining good behavior, for users and moderators alike.
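
To make the inbox/outbox model concrete, here is a minimal sketch of server-to-server delivery, using hypothetical hosts (example.social and remote.example) standing in for real instances. Real servers also require HTTP Signatures and other authentication headers, which are omitted here:

```python
import requests

# A Create activity wrapping a Note, per the ActivityStreams vocabulary.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://remote.example/users/bob"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello from the Fediverse!",
    },
}

# Alice's instance delivers the activity to Bob's inbox on his instance.
# Bob's instance then decides whether to show it to him - or drops it,
# if Alice's instance has been defederated.
requests.post(
    "https://remote.example/users/bob/inbox",
    json=activity,
    headers={"Content-Type": "application/activity+json"},
)
```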

ActivityPub handles a wide variety of uses, but the application most associated with the protocol is Mastodon. However, ActivityPub is also integral to Meta's own Twitter alternative, Threads, which is taking small steps to connect with the Fediverse. Threads is a totally different application, solely hosted by Meta, and is ten times bigger than the Fediverse and Bluesky networks combined, making it the 500-pound gorilla in the room. Meta's poor reputation on privacy, moderation, and censorship has driven many Fediverse instances to vow they'll defederate from Threads. Other instances may still connect with Threads to help users find a broader audience, and perhaps help sway Threads users to try Mastodon instead.

AT Protocol

The Authenticated Transfer (AT) Protocol is newer, sparked by Twitter co-founder Jack Dorsey in 2019. Like ActivityPub, it is an open source protocol. However, it is developed unilaterally by a private for-profit corporation, Bluesky PBLLC, though it may be handed to a web standards body in the future. Bluesky remains mostly centralized. While it has recently opened up to small hosts, there are still some restrictions preventing major alternatives from participating. As its developers further loosen control, we will likely see rapid changes in how people use the network.

The AT Protocol network design doesn't put the same emphasis on individual hosts as the Fediverse does, and breaks up hosting, distribution, and curation into distinct services. It's easiest to understand in comparison to traditional web hosting. Your information, like posts and profiles, is held in Personal Data Servers (PDSes), analogous to the hosting of a personal website. This content is then fetched by relay servers, which, like web crawlers, aggregate a "firehose" of everyone's content without much alteration. To sort and filter this on behalf of the user, like a search engine, AT has Appview services, which give users control over what they see. When accessing the Appview through a client app or website, the user has many options to further filter, sort, and curate their feed, as well as "subscribe" to filters and labels someone else made.
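
A rough sketch of how these roles separate in practice, using the flagship Bluesky deployment's hostnames (bsky.social as PDS, bsky.network as relay, public.api.bsky.app as Appview); these endpoints reflect today's deployment and may change as the network opens up:

```python
import requests

# 1. Your PDS holds your repo; signing in is a request to your PDS.
#    The returned session["accessJwt"] authenticates later PDS calls.
session = requests.post(
    "https://bsky.social/xrpc/com.atproto.server.createSession",
    json={"identifier": "alice.bsky.social", "password": "app-password"},
).json()

# 2. Relays crawl every PDS and re-publish a firehose; consumers
#    subscribe to a websocket stream such as:
#    wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos

# 3. An Appview indexes that firehose and answers queries, e.g. a
#    public, unauthenticated profile lookup:
profile = requests.get(
    "https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile",
    params={"actor": "alice.bsky.social"},
).json()
print(profile.get("displayName"))
```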

The result is a decentralized system which can be highly tailored while still offering global reach. However, this atomized design also means the community accountability encouraged by the host-centered model may be missing, and users are ultimately responsible for their own experience and moderation. How this plays out will depend on how the network opens to major hosts other than the Bluesky corporation.

User Experience

Mastodon, Threads and Bluesky have a number of differences that are not essential to their underlying protocols but that affect users looking to get involved today. Mastodon and Bluesky are very customizable, so these differences just reflect the prevalent trends.

Timeline Algorithm

Mastodon and most other ActivityPub sites prefer a straightforward chronological timeline of content from accounts you follow. Threads has a Meta-controlled algorithm, like Instagram. Bluesky defaults to a chronological feed, but opens algorithmic curation and filtering up to apps and users.

User Design

All three services present a default appearance that will be familiar to anyone who has used Twitter. Both Mastodon and Bluesky have alternative clients, with the only limit being a developer's imagination. In fact, thanks to their open nature, projects like SkyBridge let users of one network use apps built for the other (in this case, Bluesky users using Mastodon apps). Threads has no alternative clients, as they would require its developer API, which is still in beta.

Onboarding 

Threads has the greatest advantage in getting people to sign up, as it has only one site, which accepts an Instagram account as a login. Bluesky also has only one major option for signing up, but has some inherent flexibility in moving your account later on. That said, diving into a few extra setup steps can improve the experience. Finally, one could easily join Mastodon by joining the flagship instance, mastodon.social. However, given the importance of choosing the right instance, you may miss out on some of the benefits of the Fediverse and want to move your account later on.

Culture

Threads has a reputation for being more brand-focused, with more commercial accounts and celebrities, and Meta has made no secret about their decisions to deemphasize political posts on the platform. Bluesky is often compared to early Twitter, with a casual tone and a focus on engaging with friends. Mastodon draws more people looking for community online, especially around shared interests, and each instance will have distinct norms.

Privacy Considerations

Neither ActivityPub nor the AT Protocol currently supports private, end-to-end encrypted messages, so they should not be used for sensitive information. For all services here, the majority of content on your profile will be accessible from the public web. That said, Mastodon, Threads, and Bluesky differ in how they handle user data.

Mastodon

Everything you do as a user is entrusted to the instance host, including posts, interactions, DMs, settings, and more. This means the owner of your instance can access this information, and is responsible for defending it against attackers and law enforcement. Tech-savvy people may choose to self-host, but users generally need to find an instance run by someone they trust.

The Fediverse muffles content sharing through a myriad of permissions set by users and instances. If your instance blocks a poorly moderated instance, for example, the people on that other site will no longer be in your timelines nor able to follow your posts. You can also limit how messages are shared to further reduce the intended audience. While this can create a sense of community and closeness, remember it is still public, and instance hosts are always part of the equation. Direct messages, for example, will be accessible to your host and the host of the recipient.
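
As one concrete example of these per-post permissions, Mastodon's REST API lets each status carry its own visibility scope. A minimal sketch, assuming a hypothetical instance example.social and an access token you have already created:

```python
import requests

# Post a followers-only status. Visibility can be "public", "unlisted",
# "private" (followers-only), or "direct" (mentioned users only).
requests.post(
    "https://example.social/api/v1/statuses",
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    data={
        "status": "Followers only - though my instance admin can still read this.",
        "visibility": "private",
    },
)
```

Note that even "private" and "direct" scopes only narrow the audience; the instance hosts involved can still see the content.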

If content needs to be changed or deleted after being shared, your instance can request these changes, and this is often honored. That said, once something is shared to the network, it may be difficult to “undo.”

Threads

All user content is entrusted to one host, in this case Meta, with a privacy policy similar to Instagram. Meta determines when information is shared with law enforcement, how it is used for advertising, how well protected it is from a breach, and so on.

Sharing with instances works differently for Threads, as Meta has more restricted interoperability. Currently, content sharing is one-way: Threads users can opt in to sharing their content with the Fediverse, but won't see likes or replies. By the end of this year, Meta says it will allow Threads users to follow accounts on Mastodon.

Federation on Threads may always be restricted, and features like transferring one's account to Mastodon may never be supported. Limits in sharing should not be confused with enhanced privacy or security, however. Public posts are just that—public—and you are still trusting your host (Meta) with private data like DMs (currently handled by Instagram). Instead, these restrictions, should they persist, should be seen as the minimum level of control over users that Meta deems necessary.

Bluesky

Bluesky, in contrast, is a very "loud" system. Every public message, interaction, follow and block is hosted by your PDS and freely shared with everyone in the network. Every public post is available to everyone, and is discovered according to each user's own app and filter preferences. There are ways to imitate smaller spaces with filtering and algorithmic feeds, such as with the Blacksky project, but these are open to everyone, and your posts will not be restricted to that curated space.

Direct messages are limited to the flagship Bluesky app, and can be accessed by the Bluesky moderation team. The project plans to eventually incorporate DMs into the protocol with end-to-end encryption, but this is not currently supported. Deletion on Bluesky is simply handled by removing the content from your PDS, but once a message has been shared to Relay and Appview services, it may remain in circulation a while longer according to their retention settings.

Moderation

Mastodon

Mastodon’s approach to moderation is often compared to subreddits, where the administrators of an instance are responsible for creating a set of rules and empowering a team of moderators to keep the community healthy. The result is a lot more variety in moderation experience, with the only boundary being an instance’s reputation in the broader Fediverse. Instances coordinating to “defederate” from problematic hosts has already proven effective in the Fediverse: one notorious instance, Gab, was successfully cut off for hosting extreme right-wing hate. The threat of defederation sets a baseline of behavior across the Fediverse, and from there users can choose instances based on reputation and on how aligned the hosts are with their own moderation preferences.

At their best, instances prioritize things other than growth. New members are welcomed and onboarded carefully as new community members, and hosts only grow the community if their moderation team can support it. Some instances even set a permanent cap on participation at a few thousand members to ensure a quality, intimate experience. Current members, too, can vote with their feet, and if needed can split off into their own new instance without disconnecting entirely.

While Mastodon has a lot going for it by giving users a choice, avoiding automation, and avoiding unsustainable growth, there are other evergreen moderation issues at play. Decisions can be arbitrary, inconsistent, and come with little recourse. These aren't just decisions impacting individual users, but also, when it comes to defederation, those affecting large swaths of them.

Threads

Threads, as alluded to when discussing privacy above, aims for a moderation approach more aligned with pre-2022 Twitter and Meta's other current platforms like Instagram. That is, the impossible task of scaling moderation alongside the endless growth of users.

As the largest of these services, however, Threads puts Meta in a position to set norms around moderation as it enters the Fediverse. A challenge for decentralized projects will be to ensure Meta's size doesn't make it the ultimate authority on moderation decisions, a pattern of re-centralization we've seen happen in email. Spam detection tools have created an environment where email, though an open standard, is in practice dominated by Microsoft and Google, as smaller services are frequently marked as spammers. A similar dynamic could play out on the federated social web, where Meta has the capacity to exclude smaller instances with little recourse. Other instances may copy these decisions, or fear not to do so lest they, too, be excluded.

Bluesky

While in beta, Bluesky received a lot of praise and criticism for its moderation. However, up until recently, all moderation was handled by the centralized Bluesky company, not throughout the distributed AT network. The true nature of the network's moderation structure is only now being tested.

The AT Protocol relies on labeling services, aka "labelers," for moderation. These special accounts, using Bluesky's Ozone tool, label posts with small pieces of metadata. You can also filter accounts with account block lists published by other users, a lot like the Block Together tool formerly available on Twitter. The Appview aggregating your feed uses these labels and block lists to filter content. Arbitrary and irreconcilable moderation decisions are still a problem, as are some of the risks of automated moderation, but they are less impactful here because users are not deplatformed and remain accessible to people with different moderation settings. This also means problematic users don't go anywhere and can still follow you; they are just less visible.
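
A toy illustration (not the real Ozone or Appview code) of how this composes: labels and block lists are applied on the reading side, so filtered posts still exist at their source. The data shapes here are hypothetical:

```python
# Posts as seen in the firehose, already annotated by labelers you trust.
posts = [
    {"author": "did:plc:aaa", "text": "hello", "labels": []},
    {"author": "did:plc:bbb", "text": "spam spam", "labels": ["spam"]},
    {"author": "did:plc:ccc", "text": "gm", "labels": ["rude"]},
]

my_blocklist = {"did:plc:ccc"}   # e.g. a block list you subscribed to
hidden_labels = {"spam"}         # labels your settings choose to hide

# The Appview/client applies your preferences when building the feed.
feed = [
    p for p in posts
    if p["author"] not in my_blocklist
    and not hidden_labels & set(p["labels"])
]

# Filtered-out posts still live on their PDSes; users with different
# settings (or different labelers) will still see them.
```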

The AT network is censorship-resistant and, conversely, it is difficult to meaningfully ban users. To be propagated in the network, a user only needs a PDS to host their account and at least one relay to spread that information. Currently, relays sit out of moderation, only scanning to restrict CSAM. In theory, relays could act more like a Fediverse instance and more actively curate and moderate users. Even then, as long as one relay carries the user, they will be part of the network. PDSes, much like web hosts, may also choose to remove controversial users, but even in those cases, PDSes are easy to self-host, even on a low-power computer.

As on the internet generally, removing content relies on the fragility of those targeted: with enough resources and support, a voice will remain online. Without user-driven approaches to limiting or deplatforming content (like defederation), Bluesky services may instead be targeted by censorship at the infrastructure level, such as by ISPs.

Hosting and Censorship

With any internet service, there are some legal obligations when hosting user-generated content. No matter the size, hosts may need to contend with DMCA takedowns, warrants for user data, cyberattacks, blocking from authoritarian regimes, and other pressures from powerful interests. This decentralized approach to social media also relies on a shared legal protection for all hosts: Section 230. By ensuring they are not held liable for user-generated content, this law provides the legal protection necessary for these platforms to operate and innovate.

Given the differences in the size of hosts and their approach to moderation, it isn’t surprising that each of these platforms will address platform liability and censorship differently.

Mastodon

Instance hosts, even for small communities, need to navigate these legal considerations, as we outlined in our Fediverse legal primer. We have already seen some old patterns reemerge, with these smaller, often hobbyist, hosts struggling to defend themselves from legal challenges and security threats. While larger hosts have resources to defend against these threats, an advantage of the decentralized model is that censors need to play whack-a-mole in a large network where messages flow freely across the globe. Taken together, the Fediverse is set up to be quite good at keeping information safe from censorship, but individual users and accounts are very susceptible to targeted censorship efforts and will struggle to rebuild their presence.

Threads

Threads is the easiest to address, as Meta is already several platforms deep into addressing liability and speech concerns, and has the resources to do so. Unlike Mastodon or Bluesky, it also needs to do so on a much larger scale, with a larger target on its back as the biggest platform, backed by a multi-billion dollar company. The unique challenge for Threads, however, will be how Meta decides to handle content from the rest of the Fediverse. Threads users will also need to navigate the perks and pitfalls of sticking with a major host with a spotty track record on censorship and disinformation.

Bluesky

Bluesky is not yet tested beyond the flagship Bluesky services, and raises a lot more questions. PDSes, relays and even Appviews play some role in hosting, and can be used with some redundancy. For example, your account on one PDS may be targeted, but the system is designed to make it easy for users to change hosts, self-host, or have multiple hosts while retaining one identity on the network.

Relays, in contrast, are more computationally demanding and may remain the most "centralized" service, as natural monopolies: users have some incentive to mostly follow the biggest relays. The result is a potential bottleneck susceptible to influence and censorship. However, if we see a wide variety of relays with different incentives, it becomes more likely that messages can be shared throughout the network despite censorship attempts.

You Might Not Have to Choose

With this overview, you can start diving into one of these new Twitter alternatives leading the way in a more free social web. Thanks to the open nature of these new systems, where you set up will become less important with improved interoperability.

Both ActivityPub and AT Protocol developers are receptive to making the two better at communicating with one another, and independent projects like Bridgy Fed, SkyBridge, RSS Parrot and Mastofeed are already letting users get the best of both worlds. A growing number of projects speak both protocols, along with older ones like RSS. It may be that these paths toward a decentralized web become increasingly trivial to cross as they converge, despite some early growing pains. Or the two may be eclipsed by yet another option. But their shared trajectory is moving us toward a more free, more open, and refreshingly weird social web, free of platform gatekeepers.

Ah, Steamboat Willie. It’s been too long. 🐭

By: Aaron Jue
June 18, 2024 at 11:31

Did you know Disney’s Steamboat Willie entered the public domain this year? Since its 1928 debut, the U.S. Congress has made multiple changes to copyright law, extending Disney’s ownership of this cultural icon for almost a century. A century.

Creativity should spark more creativity.

That’s not how intellectual property laws are supposed to work. In the United States, these laws were designed to give creators a financial incentive to contribute to science and culture. Then eventually the law makes this expression free for everyone to enjoy and build upon. Disney itself has reaped the abundant benefits of works in the public domain including Hans Christian Andersen’s “The Little Mermaid" and "The Snow Queen." Creativity should spark more creativity.

In that spirit, EFF presents to you this year’s EFF member t-shirt simply called “Fix Copyright":

Copyright Creativity is fun for the whole family.

The design references Steamboat Willie, but also tractor owners’ ongoing battle to repair their equipment despite threats from manufacturers like John Deere. These legal maneuvers are based on Section 1201 of the Digital Millennium Copyright Act or DMCA. In a recent appeals court brief, EFF and co-counsel Wilson Sonsini Goodrich & Rosati argued that Section 1201 chills free expression, impedes scientific research, and to top it off, is unenforceable because it’s too broad and violates the First Amendment. Ownership ain’t what it used to be, so let’s make it better.

We need you! Get behind this mission and support EFF's work as a member through our 34th anniversary on July 10.

You can help cut through the BS and make the world a little brighter—whether online or off.

Join EFF

Defend Creativity & Innovation Online

_________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Hand me the flashlight. I’ll be right back...

By: M. Jackalope
June 13, 2024 at 03:21

It’s time for the second installment of campfire tales from our friends, The Encryptids—the rarely-seen enigmas who’ve become folk legends. They’re helping us celebrate EFF’s summer membership drive for internet freedom!

Through EFF's 34th birthday on July 10, you can receive 2 rare gifts, be a member for just $20, and, as a bonus, new recurring monthly or annual donations get a free match! Join us today.

So...do you ever feel like tech companies still own the devices you’ve paid for? Like you don’t have alternatives to corporate choices? Au contraire! Today, Monsieur Jackalope tells us why interoperability plays a key role in giving you freedom in tech...

-Aaron Jue
EFF Membership Team

_______________________________________

Jackalope in a forest saying "Interoperability makes good things great!"

Call me Jacques. Some believe I am cuddly. Others deem me ferocious. Yet I am those things and more. How could anyone tell me what I may be? Beauty lives in creativity, innovation, and yes, even contradiction. When you are confined to what is, you lose sight of what could be. Zut! Here we find ourselves at the mercy of oppressive tech companies who perhaps believe you are better off without choices. But they are wrong.

Control, commerce, and lack of competition. These limit us and rob us of our potential. We are destined for so much more in tech! When I must make repairs on my scooter, do I call Vespa for their approval on my wrenches? Mais non! Then why should we prohibit software tools from interacting with one another? The connected world must not be a darker reflection of this one we already know.

The connected world must not be a darker reflection of this one we already know.

EFF’s team—avec mon ami Cory Doctorow!—advocates powerfully for systems in which we do not need the permission of companies to fix, connect, or play with technology. Oui, c’est difficile: you find copyrighted software in nearly everything, and sparkling proprietary tech lures you toward crystal prisons. But EFF has helped make excellent progress with laws supporting your Right to Repair; they speak out against tech monopolies, lift up the free and open source software community, and advocate for creators across the web.

Join EFF

Interoperability makes good things great

You can make a difference in the fight to truly own your devices. Support EFF's efforts as a member this year and reach toward the sublime web that interconnection and creativity can bring.

Cordialement,

Monsieur Jackalope

_______________________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Wanna Make Big Tech Monopolies Even Worse? Kill Section 230

It’s no fun when your friends ask you to take sides in their disputes. The plans for every dinner party, wedding, and even funeral arrive at a juncture where you find yourself thinking, “Dang, if I invite her, then he won’t come.”

It’s even less fun when you’re running an online community, from a groupchat to a Mastodon server (or someday, a Bluesky server), or any other (increasingly cheap and easy) space where your friends (and their friends) can hang out online, far from the unquenchable dumpster-fires of Big Tech social media.

But there’s a circle of hell that’s infinitely worse than being asked to choose sides in a flamewar: being threatened with a lawsuit for refusing to do so (or even for complying with one side’s request over the other).

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

At EFF, we’ve had decades of direct experience with the, uh, heated rhetoric that attends online disputes (there’s a reason the most famous law about online arguments was coined by the very first person EFF ever hired).

That’s one of the reasons we’re such big fans of Section 230 (47 U.S.C. § 230), a much-maligned, badly misunderstood law that protects people who run online services from being dragged into legal disputes between their users.

Getting sued can profoundly disrupt your life, even if you win. Much of the time, people on the receiving end of legal threats are forced to settle because they can’t afford to defend themselves in court. There's a whole cottage industry of legal bullies who’ll help the thin-skinned, vindictive and deep-pocketed to silence their critics.

That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.

Look, running online communities is already a thankless task, one that can convert a generous digital host into a bitter ex-host.

The alternatives to Big Tech come from individuals, co-ops, nonprofits and startups. These cannot exist in a world where we change the law to make people who offer a space where communities may gather vulnerable to being dragged into lawsuits between their community members.

It’s one thing to volunteer your time and resources to create a hospitable place online; it’s another thing entirely to assume an uninsurable risk that could jeopardize your life’s savings, your home, and your retirement fund. Defending against a single such case can cost hundreds of thousands of dollars.

That’s very bad news indeed, because a world without Section 230 will desperately need alternatives to Big Tech.

Big Tech has deep pockets, which means that even if it creates a system of hair-trigger moderation that takes down anything remotely controversial on sight, it will still attract a staggering number of legal threats.

There’s a useful analogy here to FTX, the disgraced, fraudulent cryptocurrency exchange. Like Big Tech, FTX has some genuinely aggrieved users, but it has also been targeted by opportunistic treasure hunters who have laid claims against the company totaling 23.6 quintillion dollars.

We know what Big Tech will do in a post-230 world, because some of us are already living in that world. Donald Trump signed SESTA-FOSTA into law in 2018. The law was billed as a narrowly targeted measure to make platforms liable for failing to intervene in cases where they were aware of human trafficking. In practice, the law has been used to indiscriminately target consensual sex work, placing sex workers in harm’s way (just as we predicted).

Without Section 230, Big Tech will shoot first, ask questions later when it comes to taking down controversial online speech (like #MeToo or Black Lives Matter). For marginalized users with little social power (again, like #MeToo or Black Lives Matter participants), Big Tech takedowns will be permanent, because Big Tech has no incentive to figure out whether it’s worth hosting their speech.

Meanwhile, for the wealthy and powerful, a post-230 world is one where dictators, war criminals, and fraudsters will have a new, powerful tool to silence their critics.

A post-230 world, in other words, is a world where Big Tech is infinitely worse for the users who already suffer most from the large platforms’ moderation failures.

But it’s also a world where it’s infinitely harder to start an alternative to Big Tech’s gigantic walled gardens.

No wonder tech billionaires support getting rid of Section 230: they understand that their overgrown, universally loathed services are vulnerable to real alternatives.

Four years ago, the Biden Administration declared that promoting competition was a whole-of-government priority (and we cheered). Getting rid of Section 230 will do the opposite: freeze the internet in its current, monopolized state, creating a world where the rule of today’s tech barons is never challenged by a more democratic, user-centric internet.

Take Action

Ending Section 230 Will Make Big Tech Monopolies Even Worse
