
Wanna Make Big Tech Monopolies Even Worse? Kill Section 230

It’s no fun when your friends ask you to take sides in their disputes. The plans for every dinner party, wedding, and even funeral arrive at a juncture where you find yourself thinking, “Dang, if I invite her, then he won’t come.”

It’s even less fun when you’re running an online community, from a groupchat to a Mastodon server (or someday, a Bluesky server), or any other (increasingly cheap and easy) space where your friends (and their friends) can hang out online, far from the unquenchable dumpster-fires of Big Tech social media.

But there’s a circle of hell that’s infinitely worse than being asked to choose sides in a flamewar: being threatened with a lawsuit for refusing to do so (or even for complying with one side’s request over the other).

At EFF, we’ve had decades of direct experience with the, uh, heated rhetoric that attends online disputes (there’s a reason the most famous law about online arguments was coined by the very first person EFF ever hired).

That’s one of the reasons we’re such big fans of Section 230 (47 U.S.C. § 230), a much-maligned, badly misunderstood law that protects people who run online services from being dragged into legal disputes between their users.

Getting sued can profoundly disrupt your life, even if you win. Much of the time, people on the receiving end of legal threats are forced to settle because they can’t afford to defend themselves in court. There's a whole cottage industry of legal bullies who’ll help the thin-skinned, vindictive and deep-pocketed to silence their critics.

That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.

Look, running online communities is already a thankless task that can convert a generous digital host into a bitter ex-online host.

The alternatives to Big Tech come from individuals, co-ops, nonprofits and startups. These cannot exist in a world where we change the law to make people who offer a space where communities may gather vulnerable to being dragged into lawsuits between their community members.

It’s one thing to volunteer your time and resources to create a hospitable place online; it’s another thing entirely to assume an uninsurable risk that could jeopardize your life’s savings, your home, and your retirement fund. Defending against a single such case can cost hundreds of thousands of dollars.

That’s very bad news indeed, because a world without Section 230 will desperately need alternatives to Big Tech.

Big Tech has deep pockets, which means that even if it creates a system of hair-trigger moderation that takes down anything remotely controversial on sight, it will still attract a staggering number of legal threats.

There’s a useful analogy here to FTX, the disgraced, fraudulent cryptocurrency exchange. Like Big Tech, FTX has some genuinely aggrieved users, but FTX has also been targeted by opportunistic treasure hunters who have laid claims against the company totaling 23.6 quintillion dollars.

We know what Big Tech will do in a post-230 world, because some of us are already living in that world. Donald Trump signed SESTA-FOSTA into law in 2018. The law was billed as a narrowly targeted measure to make platforms liable for failing to intervene in cases where they were aware of human trafficking. In practice, the law has been used to indiscriminately target consensual sex work, placing sex workers in harm’s way (just as we predicted).

Without Section 230, Big Tech will shoot first, ask questions later when it comes to taking down controversial online speech (like #MeToo or Black Lives Matter). For marginalized users with little social power (again, like #MeToo or Black Lives Matter participants), Big Tech takedowns will be permanent, because Big Tech has no incentive to figure out whether it’s worth hosting their speech.

Meanwhile, for the wealthy and powerful, a post-230 world is one where dictators, war criminals, and fraudsters will have a new, powerful tool to silence their critics.

A post-230 world, in other words, is a world where Big Tech is infinitely worse for the users who already suffer most from the large platforms’ moderation failures.

But it’s also a world where it’s infinitely harder to start an alternative to Big Tech’s gigantic walled gardens.

No wonder tech billionaires support getting rid of Section 230: they understand that their overgrown, universally loathed services are vulnerable to real alternatives.

Four years ago, the Biden Administration declared that promoting competition was a whole-of-government priority (and we cheered). Getting rid of Section 230 will do the opposite: freeze the internet in its current, monopolized state, creating a world where the rule of today’s tech barons is never challenged by a more democratic, user-centric internet.

NETMundial+10 Multistakeholder Statement Pushes for Greater Inclusiveness in Internet Governance Processes

A new statement about strengthening internet governance processes emerged from the NETMundial+10 meeting in Brazil last month, strongly reaffirming the value of and need for a multistakeholder approach involving full and balanced participation of all parties affected by the internet—from users, governments, and private companies to civil society, technologists, and academics.

But the statement did more than reiterate commitments to more inclusive and fair governance processes. It offered recommendations and guidelines that, if implemented, can strengthen multistakeholder principles as the basis for global consensus-building and democratic governance, including in existing multilateral internet policymaking efforts.


The event and statement, to which EFF contributed dialogue and recommendations, follow up on the 2014 NETMundial meeting, which ambitiously sought to consolidate multistakeholder processes for internet governance and recommended 10 process principles. It’s fair to say that over the last decade, turning those words into action has been an uphill battle.

Achieving truly fair and inclusive multistakeholder processes for internet governance and digital policy continues to face many hurdles. Governments, intergovernmental organizations, international standards bodies, and large companies have continued to wield their resources and power. Civil society organizations, user groups, and vulnerable communities are too often sidelined or permitted only token participation.

Governments often tout multistakeholder participation, but in practice it is a complex task to achieve. The current Ad Hoc Committee negotiations of the proposed UN Cybercrime Treaty highlight the complexity and controversy of multistakeholder efforts. Although the treaty negotiation process was open to civil society and other nongovernmental organizations (NGOs), with positive steps like tracking changes to amendments, most real negotiations occur informally, behind closed doors, excluding NGOs.

This reality presents a stark contrast and practical challenge for truly inclusive multistakeholder participation, as the most important decisions are made without full transparency and broad input. This demonstrates that, despite the appearance of inclusivity, substantive negotiations are not open to all stakeholders.

Consensus building is another important multistakeholder goal but faces significant practical challenges because of the human rights divide among states in multilateral processes. For example, in the context of the Ad Hoc Committee, achieving consensus has remained largely unattainable because of stark differences in human rights standards among member States. Mechanisms for resolving conflicts and enabling decision-making should consider human rights laws to indicate redlines. In the UN Cybercrime Treaty negotiations, reaching consensus could potentially lead to a race to the bottom in human rights and privacy protections.

To be sure, seats at the policymaking table must be open to all to ensure fair representation. Multi-stakeholder participation in multilateral processes allows, for example, civil society to advocate for more human rights-compliant outcomes. But while inclusivity and legitimacy are essential, they alone do not validate the outcomes. An open policy process should always be assessed against the specific issue it addresses, as not all issues require global regulation or can be properly addressed in a specific policy or governance venue.

The NETmundial+10 Multistakeholder Statement, released April 30 following a two-day gathering in São Paulo of 400 registered participants from 60 countries, addresses issues that have prevented stakeholders, especially the less powerful, from meaningful participation, and puts forth guidelines aimed at making internet governance processes more inclusive and accessible to organizations and participants from diverse regions.

For example, the 18-page statement contains recommendations on how to strengthen inclusive and diverse participation in multilateral processes, which includes State-level policy making and international treaty negotiations. Such guidelines can benefit civil society participation in, for example, the UN Cybercrime Treaty negotiations. EFF’s work with international allies in the UN negotiating process is outlined here.

The NETmundial statement takes asymmetries of power head-on, recommending that governance processes provide stakeholders with information and resources and offer capacity-building to make these processes more accessible to those from developing countries and underrepresented communities. It sets more concrete guidelines and process steps for multistakeholder collaboration, consensus-building, and decision-making, which can serve as a roadmap in the internet governance sphere.

The statement also recommends strengthening the UN-convened Internet Governance Forum (IGF), a predominant venue for the frank exchange of ideas and multistakeholder discussions about internet policy issues. The multitude of initiatives and pacts around the world dealing with internet policy can cause duplication, conflicting outcomes, and incompatible guidelines, making it hard for stakeholders, especially those from the Global South, to find their place. 


The IGF could strengthen its coordination and information-sharing role and serve as a venue for follow-up on multilateral digital policy agreements. The statement also recommends improvements in the dialogue and coordination among global, regional, and national IGFs to establish continuity between them and bring global attention to local perspectives.

We were encouraged to see the statement recommend that IGF’s process for selecting its host country be transparent and inclusive and take into account human rights practices to create equitable conditions for attendance.

EFF and 45 digital and human rights organizations last year called on the UN Secretary-General and other decision-makers to reverse their decision to grant host status for the 2024 IGF to Saudi Arabia, which has a long history of human rights violations, including the persecution of human and women’s rights defenders, journalists, and online activists. Saudi Arabia’s draconian cybercrime laws are a threat to the safety of civil society members who might consider attending an event there.  

Nominations Open for 2024 EFF Awards!

Nominations are now open for the 2024 EFF Awards! The nomination window will be open until May 31st at 2:00 PM Pacific time. You could nominate the next winner today!

For over thirty years, the Electronic Frontier Foundation has presented awards to key leaders and organizations in the fight for freedom and innovation online. The EFF Awards celebrate the longtime stalwarts working on behalf of technology users, both in the public eye and behind the scenes. Past honorees include visionary activist Aaron Swartz, human rights and security researchers The Citizen Lab, media activist Malkia Devich-Cyril, cyberpunk author William Gibson, and whistleblower Chelsea Manning.

The internet is a necessity in modern life and a continually evolving tool for communication, creativity, and human potential. Together we carry—and must always steward—the movement to protect civil liberties and human rights online. Will you help us spotlight some of the latest and most impactful work towards a better digital future?

Remember, nominations close on May 31st at 2:00 PM Pacific time!

GO TO NOMINATION PAGE

Nominate your favorite digital rights heroes now!

After you nominate your favorite contenders, we hope you will consider joining us on September 12 to celebrate the work of the 2024 winners. If you have any questions or if you'd like to receive updates about the event, please email events@eff.org.

The EFF Awards depend on the generous support of individuals and companies with a passion for digital civil liberties. To learn how you can sponsor the EFF Awards, please email tierney@eff.org.

EFF Urges Supreme Court to Reject Texas’ Speech-Chilling Age Verification Law

A Texas age verification law will rob people of anonymity online, chill access to speech for privacy- and security-minded internet users, and entirely block some adults from accessing constitutionally protected online content, EFF argued in a brief filed with the Supreme Court last week.

EFF joined the Woodhull Freedom Foundation in filing a friend-of-the-court brief urging the U.S. Supreme Court to grant review of—and ultimately overturn—the Fifth Circuit’s decision upholding the Texas law.

Last year, the state of Texas passed HB 1181 in a misguided attempt to shield minors from certain online content. The law requires all Texas internet users, including adults, to complete invasive “age verification” procedures on every website the state deems to be at least one-third composed of sexual material. Under the law, adult users must upload sensitive personal records—such as a driver’s license or other photo ID—to access any content on these sites, including non-explicit content. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect.

The Fifth Circuit’s decision disregards important constitutional principles. The First Amendment protects our right to access protected online speech without substantial government interference. For adults, this is true even if that speech constitutes sexual or explicit content. The government cannot burden adult internet users and force them to sacrifice their anonymity, privacy, and security simply to access lawful speech.

EFF’s position is hardly unique. Courts have repeatedly and consistently held similar age verification laws to be unconstitutional due to these and other harms. As EFF noted in its brief, the Fifth Circuit’s decision is an anomaly and has created a split among federal circuit courts. 

In coming to its decision, the Fifth Circuit relied largely on a single Supreme Court case from 1968, involving a law that required an in-person ID check to buy magazines featuring adult content. But online age verification is nothing like flashing an ID card in person to buy a particular physical item.

For one, HB 1181 blocks access to entire websites, not just individual offending magazines. This could include many common, general-purpose websites, so long as at least one-third of the content is conceivably adult content. “HB 1181’s requirements are akin to requiring ID every time a user logs into a streaming service like Netflix, regardless of whether they want to watch a G- or R-rated movie,” EFF wrote.

Second, and unlike with in-person age-gates, the only viable way for a website to comply with HB 1181 is to require all users to upload and submit, not just momentarily display, a data-rich government-issued ID or other document with personal identifying information. In its brief, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns.

For example, HB 1181 may permit the Texas government to log and track user access when verification is done via government-issued ID. As the trial court explained, the law “runs the risk that the state can monitor when an adult views sexually explicit materials” and threatens to force individuals “to divulge specific details of their sexuality to the state government to gain access to certain speech.”

Additionally, a person who submits identifying information online can never be sure if websites will keep that information or how that information might be used or disclosed. EFF noted that HB 1181 does not require all parties who may have access to the data—such as third-party intermediaries, data brokers, or advertisers—to delete that data. This leaves users highly vulnerable to data breaches and other security harms.

Finally, EFF explained that millions of adult internet users would be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

There are less restrictive alternatives to mass online age-gating that would still protect minors without substantially burdening adults. The trial court, in fact, outlined several of these alternatives in its decision, based on the factual evidence presented by the parties. The Fifth Circuit completely ignored these findings.

EFF has been a steadfast critic of efforts to censor the internet and burden access to online speech. We hope the Supreme Court agrees to hear this appeal and reverses the decision of the Fifth Circuit.

Speaking Freely: Ethan Zuckerman

Ethan Zuckerman is a professor at the University of Massachusetts at Amherst, where he teaches Public Policy, Communication and Information. He is starting a new research center called the Institute for Digital Public Infrastructure. Over the years, he’s been a tech startup guy (with Tripod.com), a non-profit founder (Geekcorps.org) and co-founder (Globalvoices.org), and throughout it all, a blogger.

This interview has been edited for length and clarity.

York: What does free speech or free expression mean to you? 

It is such a complicated question. It sounds really easy, and then it gets really complicated really quickly. I think freedom of expression is this idea that we want to hear what people think and feel and believe, and we want them to say those things as freely as possible. But we also recognize at the same time that what one person says has a real effect on what other people are able to say or feel comfortable saying. So there’s a naive version of freedom of expression which sort of says, “I’m going to say whatever I want all the time.” And it doesn’t do a good job of recognizing that we are in community. And that the ways in which I say things may make it possible or not possible for other people to say things. 

So I would say that freedom of expression is one of these things that, on the surface, looks super simple. You want to create spaces for people to say what they want to say and speak their truths no matter how uncomfortable they are. But then you go one level further than that and you start realizing, oh, okay, what I’m going to do is create spaces that are possible for some people to speak and not for other people to speak. And then you start thinking about how you create a multiplicity of spaces and how those spaces interact with one another. So it’s one of these fractally complicated questions. The first cut at it is super simple. And then once you get a little bit into it it gets incredibly complicated. 

York: Let’s dig into that complexity a bit. You and I have known each other since about 2008, back when we were both, I would say, pretty excited about how the internet was able to bring people together across borders, across affinities, etc. The online atmosphere has changed dramatically in that time. What are some of the changes you’ve seen, and how do you think we can preserve a sense of free expression online while also countering some of these downsides or harms? 

Let’s start with the context you and I met in. You and I both were very involved in the early years of Global Voices. I’m one of the co-founders along with Rebecca MacKinnon and a whole crew of remarkable people who started this online community as a way of trying to amplify voices that we don’t hear from very often. A lot of my career on the internet has been about trying to figure out whether we can use technology to help amplify voices of people in parts of the world where most of us haven’t traveled, places that we seldom hear from, places that don’t always get attention in the news and such. So Rebecca and I, at the beginning of the 2000s, got really interested in ways that people were using blogs and new forms of technology to report on what was going on. And for me it was places like Sub-Saharan Africa. Rebecca was interested in places like North Korea and sort of getting a picture of what was going on in some of those places, through the lens, often, of Chinese business people who were traveling to those places. 

And we started meeting bloggers who were writing from Iraq, which was under US attack at that point. Who were writing from countries like Madagascar, which had a lot going on politically, but almost no one knew about it or was hearing about it. So you and I started working in this context of, can we amplify these voices? Can we help people speak freely and have an audience? Because that’s one of these interesting problems—you can speak freely if you’re anonymous and on an onion site, etc., but no one’s going to hear you. So can we help people not just speak freely, but can we help find an audience associated with it? And some of the work that I was doing when you and I first met was around things like anonymous blogging with WordPress and Tor. And literally building guides to help people who are whistleblowers in closed societies speak online. 

You and I were also involved with the Berkman Center at Harvard, and we were both working on questions of censorship. One of the things that’s so interesting for me—to sort of go back in history—is to think about how censorship has changed online. Who those opponents of speech are. We started with the assumption that it was going to be the government of Saudi Arabia, or the government of Tunisia, or the government of China, who was going to block certain types of speech at the national level. You know, “You can’t say this. You’re going to be taken down, or, at worst, arrested for saying this.” We then pivoted, to a certain extent, to worries about censorship by companies, by platforms. And you did enormous amounts of work on this! You were at war with Facebook, now Meta, over their work on the female-presenting nipple. We were now looking at the different ways in which companies might decide that something was allowable speech or unallowable speech based on standards that had nothing to do with what their users thought, but really with what the platforms decided. 

Somewhere in the late 20-teens, I think the battlefield shifted a little bit. And I think there are still countries censoring the internet, there are still platforms censoring the internet, but we got much better at censorship by each other. And, for me, this begins in a serious way with Gamergate. Where you have people—women, critics of the gaming industry—talking about feminist counter-narratives in video games. And the reaction from certain members of an online community is so hostile and so abusive, there’s so much violent misogyny aimed at people like Anita Sarkeesian and sort of other leaders in this field, that it’s another form of silencing speech. Basically the consequences for some people speaking are now so high, like the amount of abuse you’re going to suffer, whether it’s swatting, whether it’s people releasing a video game where players beat you up—and that’s what happened to Anita—it doesn’t silence you in the same way that, like, the Great Firewall or having your blog taken down might silence you. But the consequences for speech get so high that they really shift and change the speech environment. And part of what’s so tricky about this is some of the people who are using speech to silence speech talk about their right to free speech and how free speech protects their ability to do this. And in some sense, they’re right. In another sense, they’re very wrong. They’re using speech to raise the consequences for other people’s speech and make it incredibly difficult for certain types of speech to take place. 

So I feel like we’ve gone from these very easy enemies—it’s very easy to be pissed off at the Saudis or the Chinese, it’s really satisfying to be pissed off at Facebook or any of the other platforms. But once we start getting to the point where we’re sort of like, hey, your understanding of free speech is creating an environment where it’s very hard or it’s very dangerous for others to speak, that’s where it gets super complicated. And so I would say I’ve gone from a firm supporter of free speech online, to this sort of complicated multilayered, “Wow, there’s a lot to think about in this” that I sort of gave you based on your opening question. 

York: Let’s unpack that a bit, because it’s complicated for me as well. I mean, over the years my views have also shifted. But right now we are seeing an uptick in attempts to censor legitimate speech from the various bills that we’re seeing across the African continent against LGBTQ+ speech, Saudi Arabia is always an evergreen example, Sudan just shut down the internet again, Israel shut down the internet in Palestine, Iran still has some sort of ongoing shutdown, etc., etc. I mean, name a country and there’s probably something ongoing. And, of course, including the US with the Kids Online Safety Act (KOSA), which will absolutely have a negative impact on free expression for a lot of people. And of course we’re also seeing abortion-related speech being chilled in the US. So, with all of those examples, how do we separate the questions of how we deal with this idea of crowding or censoring each other’s speech from the very real, persistent threats to speech that we’re seeing? 

I think it is totally worthwhile to mention that actors in this situation have different levels of power. So when you look at something like the Kids Online Safety Act (KOSA), which has the real danger of essentially leaving what is prohibited speech up to individual state attorneys general. And we are seeing different American state attorneys general essentially say we are going to use this to combat “transgenderism,” we’re going to use this to combat—what they see as—the “LGBTQ agenda”, but a lot of the rest of us see as humanity and people having the ability to express their authentic selves. When you have a state essentially saying, “We’re going to censor content accessible to people under 18,” first of all, I don’t think it will pass Supreme Court muster. I think even under the crazy US Supreme Court at the moment, that’s actually going to get challenged successfully. 

When I talk about this progression from state censorship to platform censorship to individual censorship, there is a decreasing amount of power. States have guns, they can arrest you. There’s a lot of things Facebook can do to you, but they can’t, at this point, arrest you. They do have enormous power in terms of large swaths of the online environment, and we need to hold that sort of power accountable as well. But these things have to be an “and”, not an “or.” 

And, at the same time, as we are deeply concerned about state power and we’re deeply concerned about platform power, we also have to recognize that changes to a speech environment can make it incredibly difficult for people to participate or not participate. So one of the examples of this, in many ways, is changes to Twitter under Elon Musk. Where technical changes as well as moderation changes have made this a less safe space for a lot of people. And under the heading of free speech, you now have an environment where it is a whole lot easier to be harassed and intimidated to the point where it may not be easy to be on the platform anymore. Particularly if you are, say, a Muslim woman coming from India, for instance. This is a subject that I’m spending a lot of time with my friend and student Ifat Gazia looking at, how Hindutva is sort of using Twitter to gang up on Kashmiri women and create circumstances where it’s incredibly unsafe and unpleasant for them to be speaking where anything they say will turn into misogynistic trolling as well as attempts to get them kicked off the platform. And so, what’s become a free speech environment for Hindu nationalism turns out to make that a much less safe environment for the position that Kashmir should be independent or that Muslims should be equal Indian citizens. And so, this then takes us to this point of saying we want either the State or the platform to help us create a level playing field, help us create a space in which people can speak. But then suddenly we have both the State and the platform coming in and saying, “you can say this, and not say this.” And that’s why it gets so complicated so fast. 

York: There are many challenges to anonymous speech happening around the world. One example that comes to mind is the UK’s Online Safety Act, which digs into it a bit. We also both have written about the importance of anonymity for protecting vulnerable communities online. Have your views on anonymity or pseudonymity changed over the years? 

One of the things that was so interesting about early blogging was that we started seeing whistleblowers. We started seeing people who had information from within governments finding ways to express what was going on, within their states and within their countries. And I think to a certain extent, kind of leading up to the rise of WikiLeaks, there was this sort of idea that anonymity was almost a mark of authenticity. If you had to be anonymous perhaps it was because you were really close to the truth. Many of us took leaks very seriously. We took this idea that this was a leak, this was the unofficial narrative, we should pay an enormous amount of attention to it. I think, like most things in a changing media environment, the notion of leaking and the notion of protected anonymity has gotten weaponized to a certain extent. I think, you know, WikiLeaks is its own complicated narrative where things which were insider documents within, say, Kenya, early on in WikiLeaks’ history, sort of turned into giant document dumps with the idea that there must be something in here somewhere that’s going to turn out to be important. And, often, there was something in there, and there was also a lot of chaff in there. I think people learned how to use leaking as a strategy. And now, anytime you want people to pay attention to a set of documents, you say, I’m going to go ahead and “leak” them. 

At the same time, we’ve also seen people weaponize anonymity. And a story that you and I are both profoundly familiar with is Gay Girl in Damascus. Where you had someone using anonymity to claim that she was a lesbian living in a conservative community and talking about her experiences there. But of course it turned out to be a middle-aged male Scotsman who had taken on this identity in the hopes of being taken more seriously. Because, of course, everyone knows that middle-aged white men never get a voice in online dialogues, he had to make himself into a queer, Syrian woman to have a voice in that dialogue. Of course, the real amusing part of that, and what we found out in unwinding that situation, was that he was in a relationship with another fake lesbian who was another dude pretending to be a lesbian to have a voice online. So there’s this way in which we went from this very sort of naive, “it’s anonymous, therefore it’s probably a very powerful source,” to, “it’s anonymous, it’s probably yet another troll.” 

I think the answer is anonymity is really complicated. Some people really do need anonymity. And it’s really important to construct ways in which people can speak freely. But anyone who has ever worked with whistleblowers—and I have—will tell you that finding a way to actually put your name to something gives it vastly more power. So I think anonymity remains important, we’ve got to find ways to defend and protect it. I think we’re starting to find that the sort of Mark Zuckerberg idea, “you get rid of anonymity and the web will be wonderful”, is complete crap. There’s many communities that end up being very healthy with persistent pseudonyms or even anonymity. It has more to do with the space and the norms associated with it. But anonymity is neither the one-size-fits-all solution to making whistleblowing safe, nor is it the “oh no, if you let anonymity in, your community will collapse.” Like everything in this space, it turns out to be complicated and nuanced. And both more and less important than we tend to think. 

York: Tell me about an early experience that shaped your views on free expression. 

The story of Hao Wu is the story I want to tell here. When I think about freedom of expression online, I find myself thinking a lot about his story. Hao Wu is a documentary filmmaker. At this point, a very accomplished documentary filmmaker. He has made some very successful films, including one called The People’s Republic of Desire about Chinese live-streaming, which has gotten a great deal of celebration. He has a new film out called 76 Days about the lockdown of Wuhan. But I got to know him very indirectly, and it was from the fact that he was making a film in China about the phenomenon of underground Christian churches. And he got arrested and held for five months, and we knew about him through the Global Voices community because he had been an active blogger. We’d been paying attention to some of the work he was doing and suddenly he’d gone silent. 

I ended up working with Rebecca MacKinnon, who speaks Chinese and was in touch with all the folk involved, and I was doing the websites and such, building a free Hao Wu blog. And using that, and sort of platforming his sister, as a chance to advocate for his release. And what was so fascinating about this was Rebecca and I spent months writing about and talking about what was going on, and encouraging his sister to speak out, but she—completely understandably—was terrified about the consequences for her own life and her own career and family. At a certain point she was willing to write online and speak out, but that experience of sort of realizing that something that feels very straightforward and easy from your perspective, miles and miles away from the political situation, like, here’s this young man who is a filmmaker and a blogger and clearly a smart, interesting person, he should be able to speak freely, of course we’re going to advocate for his release. And then talking to his family and seeing the genuine terror that his sister had, that her life could be entirely transformed, and transformed negatively, by advocating for something as simple as her brother’s release. 

It’s interesting, I think about our mutual friend Alaa Abd El-Fattah, who has spent most of his adult life in Egyptian prisons, getting detained again and again and again. His family, his former partner, and many of his friends have spent years and years and years advocating for him. This whole process of advocating for someone’s ability to speak, advocating for someone’s ability to take political action, advocating for someone’s ability to make art—the closer you get to the situation, the harder it gets. Because the closer you are to the situation, the more likely that the injustice that you’re advocating to have overturned is one that you’re experiencing as well. And it’s really interesting. I think it makes it very easy to advocate from a distance, and often much harder to advocate when you’re much closer to a situation. I think any situation where we find ourselves yelling about something on the other side of the world is a good moment to sort of check and ask: are the people who are yelling the people who are directly affected by this—are they not yelling because the danger is so high, are they not yelling because maybe we misunderstand and are advocating for something that seems right and seems obvious but is actually much more complicated than we might otherwise think? 

York: Your lab is advocating for what you call a pluraverse. So you recognize that all these major platforms are going to continue to exist, people are going to continue to use them, but as we’re seeing a multitude of mostly decentralized platforms crop up, how do we see the future of moderation on those platforms? 

It’s interesting, I spend a ton of my time these days going out and sort of advocating for a pluraverse vision of the internet. And a lot of my work is trying to both set up small internet communities with very specific foci associated with them and thinking about an architecture that allows for a very broad range of experiences. One thing I found in all this is that small platforms often have much more restrictive rules than you would expect, and often for the better. And I’ll give a very tangible example. 

I am a large person. I am, for the first time in a long time, south of 300 pounds. But I have been between 290 and 310 for most of my adult life. And I started running about six months ago. I was inspired by a guy named Martinus Evans, who ran his first marathon at 380 pounds, and started a running club called the Slow AF Running Club, which has a very active online community and advocates for fitness and running at any size. And so I now log on to this group probably three or four times a week to log my runs, get encouragement, etc. I had to write an essay to join this community. I had to sign on to an incredible set of rules, including no weight talk, no weight loss talk, no body talk. All sorts of things. And you might say, I have freedom of speech! I have freedom of expression! Well, I’m choosing to set that aside so that I can be a member of this community and get support in particular ways. And in a pluraverse, if I want to talk about weight loss or bodies or something like that I can do it somewhere else! But to be a part of this extremely healthy online community that’s really helping me out a lot, I have to sort of agree and put certain things in a box. 

And this is what I end up referring to as “small rooms.” Small rooms have a purpose. They have a community. They might have a very tight set of speech regulations. And they’re great—for that specific conversation. They’re not good for broader conversations. If I want to advocate for body positivity. If I want to advocate for healthy at any weight, any number of other things, I’m going to need to step into a bigger room. I’m going to need to go to Twitter or Facebook or something like that. And there the rules are going to be very different. They’re going to be much broader. They’re going to encourage people to come back and say, “Shut up you fat fuck.” And that is in fact what happens when you encounter some of these things on a space like Reddit. So this world of small rooms and big rooms is a world in which you might find yourself advocating for very tight speech restrictions if the community chooses them on specific platforms. And you might be advocating for very broad open rules in the large rooms with the notion that there’s always going to be conflict and there’s a need for moderation. 

Here is one of the problems that always comes up in these spaces. What happens if the community wants to have really terrible rules? What if the community is KiwiFarms and the rules are we’re going to find trans people and we’re going to harass them, preferably to death? What if that tiny room is Stormfront and we’re going to party like it’s 1939? We’re going to go right back to white nationalism and Christian nationalism and anti-Jewish and anti-Muslim hate? And things get really tricky when the group wants to trade Child Sexual Abuse Material (CSAM), because they certainly do. Or they want to create nonconsensual sexual imagery? What if it’s a group that wants to make images of Taylor Swift doing lots of things that she has never done or certainly has not circulated photos of? 

So I’ve been trying to think about this architecturally. So I think the way that I want to handle this architecturally is to have the friendly neighborhood algorithm shop. And the friendly neighborhood algorithm shop lets you do two things. It lets you view social media on a client that you control through a set of algorithms that you care about. So if you want to go in and say, “I don’t want any politics today,” or “I want politics, but only highly-verified news,” or “frankly, today give me nothing but puppies.” I think you should have the ability to choose algorithms that are going to filter your media, and choose to use them that way. But I also think the friendly neighborhood algorithm shop needs to serve platforms. And I think some platforms may say, “Hey, we’re going to have this set of rules and we’re going to enforce them algorithmically, and here are the ones we’re going to enforce by hand.” And I think certain algorithms are probably going to become de rigueur. 

I think having a check for known CSAM is probably a bare minimum for running a responsible platform these days. And having these sorts of tools that Facebook and such have created to scan large sets of images for known CSAM, making those tools available to even small platform operators is probably a very helpful thing to do. I don’t think you’re going to require someone to do this for a Mastodon node, but I think it’s going to be harder and harder to run a Mastodon node if you don’t have some of those basic protections in place. Now this gets real hard really quickly. It gets real hard because we know that some other databases out there—including databases of extremist and terrorist content—are not reviewable. We are concerned that those databases may be blocking content that is legitimate political expression, and we need to figure out ways to be able to audit these and make sure that they’re used correctly. We also, around CSAM specifically, are starting to experience a wave of people generating novel CSAM that may not actually involve an actual child, but is a recombination of images to create new scenarios. I’ve got to be honest with you, I don’t know what we’re going to do there. I don’t know how we anticipate it and block it, I don’t even know the legal status of blocking some of that imagery where there is not an actual child harmed. 
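
To make the architecture described here concrete, the sketch below shows what user-chosen feed filters plus a baseline known-image check might look like. It is a minimal illustration, not any real platform’s API: the post fields, filter names, and blocklist are assumptions, and production systems match against vetted shared databases using perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding, rather than the exact SHA-256 comparison used here for simplicity.

```python
import hashlib
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Set

@dataclass
class Post:
    author: str
    text: str
    topics: Set[str] = field(default_factory=set)
    image: Optional[bytes] = None

# A "filter" is any rule, chosen by the user or the platform,
# that decides whether a post is shown.
Filter = Callable[[Post], bool]

def no_politics(post: Post) -> bool:
    return "politics" not in post.topics

def only_puppies(post: Post) -> bool:
    return "puppies" in post.topics

# Hypothetical blocklist of hex digests of known prohibited images.
# A real deployment would load a vetted, shared database and use
# perceptual hashing instead of exact cryptographic matching.
KNOWN_BAD_HASHES: Set[str] = set()

def passes_hash_check(post: Post) -> bool:
    """Baseline platform-side check against the known-image blocklist."""
    if post.image is None:
        return True
    return hashlib.sha256(post.image).hexdigest() not in KNOWN_BAD_HASHES

def build_feed(posts: List[Post], chosen: List[Filter]) -> List[Post]:
    """Show a post only if the baseline check and every chosen filter approve."""
    return [p for p in posts if passes_hash_check(p) and all(f(p) for f in chosen)]

# "Frankly, today give me nothing but puppies":
feed = build_feed(
    [Post("a", "Election news", topics={"politics"}),
     Post("b", "Look at this corgi!", topics={"puppies"})],
    chosen=[no_politics, only_puppies],
)
print([p.text for p in feed])  # ['Look at this corgi!']
```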

So these aren’t complete solutions. But I think getting to the point where we’re running a lot of different communities, we have an algorithmic toolkit that’s available to try to do some of that moderation that we want around the community, and there is an expectation that you’re doing that work. And if you’re not, it may be harder and harder to keep that community up and running and have people interact and interoperate with you. I think that’s where I find myself doing a lot of thinking and a lot of advocacy these days. 

We did a piece a few months ago called “The Three-Legged Stool,” which is our manifesto for how to do a pluraverse internet and also have moderation and governability. It’s this sort of idea that you want to have quite a bit of control through what we call the loyal client, but you also want the platforms to have the ability to use these sorts of things. So you’ve got folks out there who are basically saying, “Oh no, Mastodon is going to become a cesspit of CSAM.” And, you know, there’s some evidence of that. We’re starting to see some pockets of that. The truth is, I don’t think Mastodon is where it’s mostly happening. I think it’s mostly on much more closed channels. But something we’ve seen from day one is that when you have the ability to do user-generated content, you’re going to get pornography and some of that pornography is going to go beyond the bounds of legality. And you’re going to end up with that line between pornography and other forms of imagery that are legally prohibited. So there’s gotta be some architectural solution, and I think at some point, running a node without having thought about those technical and architectural solutions is going to start feeling deeply irresponsible. And I think there may be ways in which not only does it end up being irresponsible, but people may end up refusing services to you if you’re not putting those basic protections into place. 

York: Do you have a free speech or free expression hero? 

Oh, that’s interesting. I mean I think this one is probably one that a lot of people are going to say, but it’s Maria Ressa. I think the place where free expression, to me, feels absolutely the most important to defend is in holding power to account. And what Maria was doing with Rappler in the Philippines was trying to hold an increasingly autocratic government responsible for its actions. And in the process found herself facing very serious consequences—imprisonment, loss of employment, those sorts of things—and managed to find a way to turn that fight into something that called an enormous amount of attention to the Duterte government and opened global conversations about how important it is to protect journalistic freedom of expression. So I’m not saying that journalistic freedom of expression is the only freedom of expression that’s important, I think enormous swaths of freedom of expression are important, but I think it’s particularly important. And I think freedom of expression in the face of real power and real consequences is particularly worth lauding and praising. And I think Maria has done something very interesting which is she has implicated a whole bunch of other actors, not just the Philippines government, but also Facebook and also the sort of economic model of surveillance capitalism. And she encouraged people to think about how all of these are playing into freedom of expression conversations. So I think that ability to take a struggle where the consequences for you are very personal and very individual and turn it into a global conversation is incredibly powerful.

Podcast Episode: Chronicling Online Communities

From Napster to YouTube, some of the most important and controversial uses of the internet have been about building community: connecting people all over the world who share similar interests, tastes, views, and concerns. Big corporations try to co-opt and control these communities, and politicians often promote scary narratives about technology’s dangerous influences, but users have pushed back against monopoly and rhetoric to find new ways to connect with each other.

(You can also find this episode on the Internet Archive and on YouTube.)

Alex Winter is a leading documentarian of the evolution of internet communities. He joins EFF’s Cindy Cohn and Jason Kelley to discuss the harms of behavioral advertising, what algorithms can and can’t be blamed for, and promoting the kind of digital literacy that can bring about a better internet—and a better world—for all of us. 

In this episode you’ll learn about: 

  • Debunking the monopolistic myth that communicating and sharing data is theft. 
  • Demystifying artificial intelligence so that it’s no longer a “black box” impervious to improvement. 
  • Decentralizing and democratizing the internet so more, diverse people can push technology, online communities, and our world forward. 
  • Finding a nuanced balance between free speech and harm mitigation in social media. 
  • Breaking corporations’ addiction to advertising revenue derived from promoting disinformation. 

Alex Winter is a director, writer and actor who has worked across film, television and theater. Best known on screen for “Bill & Ted’s Excellent Adventure” (1989) and its sequels as well as “The Lost Boys” (1987), “Destroy All Neighbors” (2024) and other films, he has directed documentaries including “Downloaded” (2013) about the Napster revolution; “Deep Web” (2015) about the online black market Silk Road and the trial of its creator Ross Ulbricht; “Trust Machine” (2018) about the rise of bitcoin and the blockchain; and “The YouTube Effect” (2022). He also has directed critically acclaimed documentaries about musician Frank Zappa and about the Panama Papers, the biggest global corruption scandal in history, and the journalists who worked in secret and at great risk to break the story.

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

ALEX WINTER
I think that people keep trying to separate the Internet from any other social community or just society, period. And I think that's very dangerous because I think that it allows them to be complacent and to allow these companies to get more powerful and to have more control and they're disseminating all of our information. Like, that's where all of our news, all of how anyone understands what's going on on the planet. 

And I think that's the problem, is I don't think we can afford to separate those things. We have to understand that it's part of society and deal with making a better world, which means we have to make a better internet.

CINDY COHN
That’s Alex Winter. He’s a documentary filmmaker who is also a deep geek.  He’s made a series of films that chronicle the pressing issues in our digital age.  But you may also know him as William S. Preston, Esquire - aka Bill of the Bill and Ted movies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet. 

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. You know, at EFF we spend a lot of time pointing out the way things could go wrong – and then of course  jumping in to fight when they DO go wrong. But this show is about envisioning – and hopefully helping create – a better future.

JASON KELLEY
Our guest today, Alex Winter, is an actor and director and producer who has been working in show business for most of his life. But as Cindy mentioned, in the past decade or so he has become a sort of chronicler of our digital age with his documentary films. In 2013, Downloaded covered the rise and fall, and lasting impact, of Napster. 2015’s Deep Web – 

CINDY COHN
Where I was proud to be a talking head, by the way. 

JASON KELLEY
– is about the dark web and the trial of Ross Ulbricht who created the darknet market the Silk Road. And 2018’s Trust Machine was about blockchain and the evolution of cryptocurrency. And then most recently, The YouTube Effect looks at the history of the video site and its potentially dangerous but also beneficial impact on the world. That’s not to mention his documentaries on The Panama Papers and Frank Zappa. 

CINDY COHN
Like I said in the intro, looking back on the documentaries you’ve made over the past decade or so, I was struck with the thought that you’ve really become this chronicler of our digital age – you know, capturing some of the biggest online issues, or even shining a light a bit on some of the corners of the internet that people like me might live in, but others might not see so much. Where does that impulse come from for you?

ALEX WINTER
I think partly my age. I came up, obviously, before the digital revolution took root, and was doing a lot of work around the early days of CGI and had a lot of friends in that space. I got my first computer probably in ‘82 when I was in college, and got my first Mac in ‘83, got online by ‘84, dial-up era and was very taken with the nascent online communities at that time, the BBS and Usenet era. I was very active in those spaces. And I'm not at all a hacker, I was an artist and I was more invested in the spaces in that way, which a lot of artists were in the eighties and into the nineties, even before the web.

So I was just very taken with the birth of internet-based communities and the fact that it was such a democratized space and I mean that, you know, literally – that it was such an interesting mix of people from around the world who felt free to speak about whatever topics they were interested in, there were these incredible people from around the world who were talking about politics and art and everything in an extremely robust way.

But also, um, it really seemed clear to me that this was the beginning of something, and so my interest from the doc side has always been charting the internet in terms of community, and what the impact of that community is on different things, either political or whatever. And that’s why my first doc was about Napster, because, you know, fast forward to 1998, which for many people is ancient history, but for us was the future.

And you’re still in a modem dial-up era and you now have an online community that has over a hundred million people on it in real time around the world who could search each other’s hard drives and communicate. What made me, I think, want to make docs was that Napster was the beginning of realizing this disparity between the media or the news or the public’s perception of what the internet was and what my experience was.

Where Shawn Fanning was kind of being tarred as this pirate and criminal. And while there were obviously ethical considerations with Napster in terms of the distribution of music, that was not my experience. My experience was this incredibly robust community, and that had extreme validity and significance at a sort of human scale.

And that's, I think, what really prompted me to start telling stories in this space. I think if anyone's interested in doing anything, including what you all do there, it's because you feel like someone else isn't saying what you want to be said, right? And so you're like, well, I better say it because no one else is saying it. So I think that was the inspiration for me to spend more time in this space telling stories here.

CINDY COHN
That's great. I mean, I do, and the stuff I hear in this is that, you know, first of all, the internet kind of erased distance so you could talk to people all over the world from this device in your home or in one place. And that people were really building community. 

And I also hear this, in terms of Napster, this huge disconnect between the kind of business model view of music, and music fans’ views of music. One of the most amazing things for me was realizing that I could find somebody who had a couple of songs that I really liked and then look at everything else they liked. And it challenged this idea that only kind of professional music critics who have a platform can suggest music to you and opened up a world, like literally felt like something just like a dam broke, and it opened up a world to music. It sounds like that was your experience as well.

ALEX WINTER
It was, and I think that really aptly describes the almost addictive fascination that people had with Napster and the confusion, even retrospectively, that that addiction came from theft, from this desire to steal in large quantities. I mean obviously you had kids in college dorm rooms pulling down gigabytes of music but the pull, the attraction to Napster was exactly what you just said – like I would find friends in Japan and Africa and Eastern Europe who had some weird like Coltrane bootleg that I’d never heard and then I was like, oh, what else do they have? And then here’s what I have, and I have a very eclectic music collection. 

Then you start talking about art, then you start talking about politics, because it was a very robust forum. So everyone was talking to each other. So it really was community, and I think that gets lost because the narrative wants to remain the narrative, in terms of gatekeepers, in terms of how capitalism works. And that power dynamic was so completely threatened by Napster that, you know, the wheels immediately cranked into gear to sort of create a narrative that was: if you use this, you’re just a terrible human being. 

And of course what it created was the beginning of this kind of online rebellion, where people who before probably didn't think of themselves as technical, or even that interested in technology, were saying, well, I'm not this thing that you're saying I am, and now I'm really going to rebel against you. Now I'm really going to dive into this space. And I think it actually drove more people to enter and build online communities, because they didn't feel like they were understood or adequately represented.

And that led all the way to the Arab Spring and Occupy, and so many other things that came up after that.

JASON KELLEY
The communities angle you're talking about is probably really useful to our audience, because I think they often find themselves, and I certainly find myself, in a lot of the kinds of communities that you've covered. Which often makes me think, like, how is this guy inside my head?

How do you think about the sorts of communities that you need to, or want to, chronicle? I know you mentioned this disconnect between the way the media covers a community and the actual community. But I'm wondering, what do you see now? Are there communities that you've missed the boat on covering?

Or things that you want to cover at this moment that just aren't getting the attention that you think they should?

ALEX WINTER
I honestly just follow the things that interest me the most. I don't particularly... look, I don't see myself as, in quotes, a chronicler of anything. I have a more modest view of myself. So I really just respond to the things that I find interesting, on two tracks, one being things that are impacting me personally.

So I'm not really an outsider surveying what I'll cover next or what topics I should address; it's about what's really impacting me personally. I was hugely invested in Napster. I mean, I was going into my office on weekends and powering up every single computer all weekend onto Napster for the better part of a year. I mean, Fanning laughed at me when I met him, but -

CINDY COHN  
Luckily, the statute of limitations may have run on that, that's good.

ALEX WINTER
Yeah, exactly. 

JASON KELLEY  
Yeah, I'm sure you're not alone.

ALEX WINTER
Yeah, but I mean, as I told Don Ienner when I did the movie, I was like, dude, I'd already bought all this music nine times over: on vinyl, on cassette, on CD. I think I even had Elcasets at one point. So the record industry still owes me money as far as I'm concerned.

CINDY COHN
I agree.

ALEX WINTER
But no, it was really a personal investment. Even, you know, my interest in the blockchain and Bitcoin, which I have mixed feelings about, I really tried to cover almost more from a political angle. I was interested, same with Deep Web in a way, in how the counter-narratives were building online, and how people were trying to create systems and spaces once the internet became corporatized, which it really did as soon as the web appeared. What did people do in response to the corporatization of these spaces?

And that's why I was covering Lauri Love's case in England, and eventually Barrett Brown's case, and then the Silk Road, which I was mostly interested in for the same reason as Napster: who were these people, what were they talking about, what drew them to this space? Because it was a very clunky, clumsy way to buy drugs, if that was really what you wanted to do, and Bitcoin is a terrible tool for crime, as everyone now knows, but didn't know so well back then.

So what was really compelling people? A lot of it was, again, community. Silk Road was very much like the alt/rec world of the early Usenet days: a lot of divergent voices and politics and things like that.

YouTube was different, because Gale Anne Hurd, the producer, had approached me and asked me if I wanted to tackle this with her. And I'd been looking at Google, largely. And I had a personal interest: I've got three boys, all of whom came up in the YouTube generations. They all moved off of regular TV and onto their laptops at a certain point in their childhood, and were just on YouTube for everything.

So I wanted to look at the corporatization of the internet, at the societal impact of the fact that our largest online community, which is YouTube, is owned by arguably the largest corporation on the planet, which is also a monopoly, and also a black box.

And what does that mean? What are the societal implications of that? So that was the kind of motive there, but it was still looking at it largely as a community.

CINDY COHN
So the conceit of the show is that we're trying to fix the internet, and I want to know: you've done a lot to shine a light on these stories from different directions, but what does it look like if we get it right? What will we see if we build online communities that are better than the ones getting the most attention now?

ALEX WINTER
I think that, you know, I've spent the last two years since I made the film, up until very recently, on the road, trying to answer that question for myself, really, because I don't believe I have the answer that I need to bestow upon the world. I have a lot of questions. But I do have an opinion.

But right now, I generally feel, as many people do, that we slept on the last 20 years. I mean, you all didn't, but many people did. And so there's a kind of reckoning now, because we let these corporations get away with murder, literally and figuratively. I think we're in a phase of debunking various myths, and that's going to take some time before we can actually do the work to make the internet better.

A large thesis that I had in making The YouTube Effect was to debunk the theory of the rabbit hole and the algorithm as some kind of all-encompassing evil. Because, sort of like the rhetoric we're seeing in AI now about how AI is going to kill everybody, those are very agenda-based narratives. They convince the public that this is all beyond them, and that they should just go back to their homes, keep buying things and eating food, ignore these thorny areas in which they have no expertise, and leave it to the experts.

And of course, that means the status quo is upheld. The corporations keep doing whatever they want and have no oversight, which is what they want. Every time Sam Altman says AI is going to kill the world, he's just saying: OpenAI is a black box, please leave us alone, let us make lots of money, and go away. That's all that means. So I think we have to start looking at the internet and technology as being run by people. There aren't even that many people running it; there's only a handful of people running the whole damn thing, for the most part. They have agendas, they have motives, they have political affiliations, they have capitalist orientations.

So I think we need to start looking at the internet in a much more specific way. I know you all have been doing this for a long time; most people do not. So, more of that: more calling people on the carpet, more specificity.

The other thing that we're seeing, and again, I'm preaching to the choir here with EFF, is that any time the public or the government or the media wakes up to something they're behind on, their inclination for how to fix it is way wrong, right?

And so that's the other place we're at right now, like with KOSA and the DSA and the Section 230 reform discussions, and they're bananas. And you feel like you're screaming into a chasm, right? Because if you say these things, people treat you like you're some kind of lunatic. Like, what do you mean you don't want to turn off Section 230? That would solve everything! And I'm like, it wouldn't; it would just break the internet! So I feel a little like a Cassandra, yowling into a void.

And so I do think it's going to take a minute to fix the internet. But I think we'll get there: the new generations are smarter, and the stakes are higher for them. You know, kids in school... well, I don't think the internet or social media is necessarily bad for kids, full stop. There's a lot of propaganda there, but they don't want harms. They want a safer environment for themselves. They don't want to stop using these platforms; they just want them to work better.

But what's happened in the last couple of years, which I think is a good thing, is that people are breaking off and forming their own communities again. Even kids; my teenagers started doing it during COVID. On Discord, they would create their own servers that no one could get on but them. There was no danger of being infiltrated by crazy people. All their friends were there, they could bring other friends in, and they could talk about whatever issues they wanted to talk about. So there's a kind of return to a more fractured, fragmented, smaller set of communities.

And I think if the internet continues to go that way, that's a good thing, right? You don't have to be on TikTok or YouTube or whatever to find your people. And for grownups, the silver lining of what happened with Twitter, with Elon Musk buying it and immediately turning it into a Nazi crash pad, is that the average adult realized they didn't have to be there either, right? They don't have to use just one place; the internet is filled with little communities they can go to to talk to their friends.

So I think we're back in a kind of Wild West, like we almost were pre-web and at the beginning of the web, and I think that's good. But I do think there's an enormous amount of misinformation, and some very bad policy all over the world, that is going to cause a lot of harm.

CINDY COHN
I mean, that's kind of my challenge to you: once we've realized that things are broken, how do we evaluate all the people who are coming in and claiming that they have the fix? You know, in The YouTube Effect, you talked to Carrie Goldberg. She has a lot of passion.

I think she's wrong about the answer. She's, I think, done a very good job illuminating some of the problems, especially for specific communities, people facing domestic violence and doxing and things like that. But she's rushed to a really dangerous answer for the internet overall. 

So I guess my challenge is, how do we help people think critically about not just the problems, but the potential issues with solutions? You know, the TikTok bans are something that's going on across the country now, and it feels like the Napster days, right?

ALEX WINTER
Yeah, totally.

CINDY COHN
People have focused on a particular issue and used it to try to say, Oh, we're just going to ban this. And all the people who use this technology for all the things that are not even remotely related to the problem are going to be impacted by this “ban-first” strategy.

ALEX WINTER
Yeah. I mean, it's media literacy. It's digital literacy. One of the most despairing things for me, making docs in this space, is how much prejudice there is against docs in this space. Obviously the far right has their agenda, which is just to silence everybody they don't agree with, right? I mean, the left can do the same thing, but the right is very good at it.

Where the left, or center-left, makes mistakes is that they're ignorant about how these technologies work, and so their solutions are wrong. We see that over and over. They have really good intentions, but the solutions are wrong; they don't actually make sense for how these technologies work. We're seeing that in AI. That was an area where I was trying to do as much work as I could during the Hollywood strike, to educate people about AI, because they were so completely misinformed, and their fixes were not fixes: they were not effective and would not be legally binding. And it was despairing only because it's kind of frowned upon to say anything about technology other than "don't use it."

CINDY COHN
Yeah.

ALEX WINTER
Right? Like, even with other documentaries, the thesis is, well, just tell your kids they can't be online, tell them to read more literature.

Right? And it just drives me crazy, because I'm a progressive lefty and my kids are all online, and guess what? They still read books and play music and go outside. So it's this very binary, black-or-white attitude towards technology: "Oh, it's just bad. Why can't we go back to the old days?"

CINDY COHN
And I think there's a false sense that if we could just turn back the clock to pre-internet times, everything would be perfect. Right? My friend Cory Doctorow talks about this: how we need to build the great new world, not the good old world. And I think that's true even for, you know, internet oldies like you and me who are thinking about maybe the 80s and 90s.

Like, I think we need to embrace where we are now and then build the better world forward. Now, I agree with you strongly about decentralization in smaller communities. As somebody who cares about free speech and privacy, I don't see a way to solve the free speech and privacy problems of the giant platforms.

We're not going to get better dictators. We need to get rid of the dictators and make a lot more spaces, not necessarily smaller, but different, differently governed spaces. But I agree with you that there is this rush to kind of turn back the clock, and I think we should try to turn it forward. And again, I kind of want to push you a little bit: what does the turning-it-forward world look like?

ALEX WINTER
I mean, I have really strong opinions about that. I mean, thankfully, my kids are very tech savvy, like any kid. And I pay attention to what they're doing, and I find it fascinating. And the thing about thinking backwards is that it's a losing proposition. Because the world will leave you behind.

Because the world's not going to go backwards. And the world is only going to go forward. And so you either have a say in what that looks like, or you don't. 

I think two things have to happen. One is media literacy, and a sort of weakening of this narrative that it's all bad, so that more people, intelligent people, are getting involved in the future. I think that will help adults get immersed in new technologies and new communities and what's going on. And at the same time, we have to be working harder to attack the tech monopolies.

I think being involved, as opposed to being abstinent, is really, really important. And I think more of that will happen with new generations, because then your eyes and your ears are open, and you'll find new communities and the like. But at the same time, we have to work much harder against this idea that we can allow Big Tech to police themselves, which is just ludicrous, and yet it's still the world we're in, and it just drives me crazy. You know, they have one agenda, which is profit and power, and they don't care about anything else.

And I think that's the danger of AI. It's not that we're all gonna die by robots. It's that this capitalist machine is just gonna roll along unchecked. That's the problem: it will eat labor, and it will eat other companies.

CINDY COHN  
I mean, I think that's one of the tricky parts about the Sam Altman shift, right, from "don't regulate us" to "please regulate us." Behind that "please regulate us" is "and we'll tell you what the regulations look like, because we're the only ones, these giant gurus, who understand enough about it to figure out how to regulate us."

And I just think it's important to recognize that it's a pivot, but you could get tricked into thinking it's actually better. And I don't think it is.

ALEX WINTER
It's 100 percent agenda-based. I mean, it's not only not better, it's completely self-serving. And I think that as long as we are following these people instead of leading them, we're going to have a problem.

CINDY COHN
Absolutely.

JASON KELLEY
Let's pause for just a moment to say thank you to our sponsor. "How to Fix the Internet" is supported by The Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Alex Winter about YouTube.

ALEX WINTER
There's a lot of information there that's of extreme value: medical, artistic, historical, political. In the film, we go to great lengths to show how Caleb Cain, who got pulled in and radicalized by the proliferation of far-right, even neo-Nazi and white supremacist nationalist content (which still proliferates on YouTube, because it really is not algorithm-oriented, it's business- and incentive-based), was himself unindoctrinated by ContraPoints, by Natalie Wynn's channel.

And you have to understand that more teenagers watch YouTube than Netflix. Like, it is everything. It is, by an order of magnitude, so much more of how they spend their time consuming media than anything else. And they're watching their friends talk, they're watching political speakers talk. My son, with his various interests, from photography to weightlifting to whatever (he's young), all of that's coming from YouTube. All of it.

And they're pretty good at discerning the crap, although a lot of the studies now show that you have to be generally predisposed to this kind of content to really go down into its darker areas, and some younger people can be.

You know, I often say that the greatest solution for people who end up getting radicalized on YouTube is more YouTube. Right? Find the people on YouTube who are doing good. And I think that's one of the big misunderstandings about disinfo: you can consume good sources, you just have to find them. And people are actually better at discerning truth from lies if that's really what they want to do, as opposed to, like, "I just want to get awash in QAnon or whatever."

I think YouTube started, not necessarily with pure intentions, but with some good intentions in terms of intentionally democratizing the landscape and voices, and allowing in people from marginalized groups and people under autocratic governments. They allowed and promoted that content, and they created the age of the democratized influencer.

That was intentional. And I would argue that they did a better job of that than my industry did; I think my industry followed their lead. I think the diversity initiatives in Hollywood came after, because Hollywood, like everyone else, is driven by money only, and they were like, oh my God, there are these giant trans and African and Chinese influencers with huge audiences; we should start allowing more people to have a voice in our business too, because we'll make money off of them. But now YouTube has grown so big, so far beyond them, and it's making them so much money, and they're so incentivized to promote disinformation, propaganda, and violent content, because it makes so much money for them on the ad side, that it's sort of a runaway train at this point.

CINDY COHN
One of the things that EFF has taken a stand on is banning behavioral advertising. And one of the things you did in The YouTube Effect is take a hard look at how big a role the algorithm is actually playing. I think the movie shows that it's not as big a role as people who want an easy answer to the problem are saying.

We've been thinking about this from the privacy perspective, and we decided that behavioral advertising was behind so many of the problems we had. I wondered how you think about that, because that is the kind of tracking and targeting that feeds some of those algorithms, but it does a lot more.

ALEX WINTER
Yeah. For all the hue and cry that they can't moderate their content, I think there's absolutely no doubt that they can. And again, this is an area that EFF specifically specializes in. But I think the question of free speech, and what constitutes free speech as opposed to what they could actually be doing to mitigate harms, is very nuanced.

And it serves them to say that it's not nuanced: that either they're going to be shackling free speech, or they should be left alone to do whatever they want, which is to make money off of advertising, a lot of which is harmful. So I think getting into the weeds on that is extremely important.

You know, a recent example is how they stopped deplatforming all the Stop the Steal content, which they had been doing very successfully: just flat-out 2020 election propaganda. That gets people hurt; it can get people killed. And deplatforming it is really not hard to do, but they make more money if they allow this kind of rampant, aggressive, propagandized advertising, as well as content, on their platform.

I just think that we have to be looking at advertising and how it functions in a very granular way, because the whole thesis of The YouTube Effect, such as we had one, is that this is not about an algorithm; it's about a business model.

These are business incentives. It's no different, and I've been saying this everywhere: it's exactly the same as the Hearst and Pulitzer wars of the late 1800s. It's just: we want to make money, we know what attracts eyeballs, and we want to make ad revenue by pumping out this garbage, because people eat it up. That doesn't require an algorithm.

CINDY COHN
My dream is that Alex Winter makes a movie that helps us evaluate all the things that people who are worried about the internet are jumping in to say we ought to do, and gives people that kind of evaluative power. Because we see, over and over again, this rush toward censorship, which is problematic for free expression but also just won't work, and this gliding past the idea that privacy has anything to do with online harms and that standing up for privacy would do anything.

I just feel like this literacy piece needs to be both about the problems and about thinking critically about the things being put forward as solutions.

ALEX WINTER
Yeah, I mean, I've been writing a lot about that for the last two years. I've written, I don't know, countless op-eds. And there are way smarter people than me, like you all and Cory Doctorow, writing about this like crazy. And I think all of that is having an impact. I think the building blocks of proper internet literacy are being set.

CINDY COHN
Well, I appreciate that you've got three kids who are healthy and happy using the internet, because those stories get overlooked as well. Not that there aren't real harms; it's just that there's this throwing-the-baby-out-with-the-bathwater kind of approach that we find in policymaking.

ALEX WINTER
Yeah, completely. I think people feel like their arms are being twisted, that they have to say these hyper-negative things or fall in line with these narratives. You know, a movie requires characters, right? I would need a court case or something to follow to find the way in, and I've always got my eyes out for that. But I do think we're at a kind of critical point.

It's really funny. I'm friends with a lot of different film critics; I've just been around a long time, and I like reading good film criticism. When I made this film, one of them, who I respect greatly, was like, I don't want to review your movie, because I really didn't like it, and I don't want to give you a really bad review.

And I said, well, why didn't you like it? And he's like, because I just didn't like your perspective. And I was like, well, what didn't you like about my perspective? Like, well, you just weren't hard enough on YouTube. You didn't just come right out and say they're terrible and no one should be using it.

And I was like, you're the problem. And there's so much of that. I feel like there's a bias that is going to take time to overcome, no matter what anyone says or whatever film anyone makes. We just have to keep chipping away at it.

JASON KELLEY
Well, it's a shame we didn't get a chance to talk to him about Frank Zappa. But what we did talk to him about was probably more interesting to our audience. The thing that stood out to me was the way he sees these technologies and sort of focuses his documentaries on the communities that they facilitate.

And that was, I think, a useful way to think about everything from the deep web to blockchain to YouTube to Napster. He sees these as building communities, and those communities are not necessarily good or bad, but they have some really positive elements, and that led him to this really interesting idea of a future of smaller communities, which I think we all agree with.

Does that sound sort of like what you pulled away from the conversation, Cindy?

CINDY COHN
I think that's right. And I also think he was really smart at noticing the difference between what it was like to be inside some of those communities and how they got portrayed in broader society. He pointed out that when the corporate interests, the copyright interests, saw what was happening on Napster, they very quickly put together a narrative that everybody was a pirate, a narrative very different from how it felt to be inside that community, having access to all of that information. That disconnect is what happens when the people who control our broader societal conversation are corporate interests with their own commercial interests at heart.

That gap between the public narrative and what it's like to be inside the communities is what connected the Silk Road story with the Napster story. And in some ways YouTube is interesting because it's actually gigantic; it's not a little corner of the internet. But I think he's trying to lift up both the issues in YouTube that are problematic and all the other things inside YouTube that are not, which, as he pointed out in the story about Caleb Cain, can be part of the solution to pulling people out of the harms.

So I really appreciate this focus. I think it really hearkens back to, you know, one of the coolest things about the internet when it first came along was this idea that we could build communities free of distance and outside of the corporate spaces.

JASON KELLEY
Yeah. And the point you're making about his recognition of who gets to decide what's to blame leads us right to the conversation around YouTube: it's easy to blame the algorithm when what's actually driving a lot of the problems we see with the site is corporate interest and engagement with the kind of content that gets people riled up, and that also makes a lot of money.

And I just love that he's able to parse out these nuances in a way that surprisingly few people do, across media and journalism and, unfortunately, in government.

CINDY COHN
Yeah, and it's fun to have a conversation with somebody who gets the problems at this level. He name-checked issues that EFF has been working on for a long time, whether that's KOSA or Section 230 or algorithmic issues, and how wrongheaded the proposed solutions are.

I appreciate that it drives him crazy in the way it drives me crazy that once you've articulated the harms, people seem to rush toward solutions, or at least are pushed toward solutions, that don't get us out of this corporate control, but rather, in some ways, put us deeper into it.

And he's already seeing that in the AI push for regulation. I think he's exactly right about that. I don't know if I convinced him to make his next movie about all of these solutions and how to evaluate them. I'll have to keep trying; that may not be where he gets his inspiration.

JASON KELLEY
We'll see. If nothing else, EFF is in many of the documentaries he has made, and my guess is that we will continue to be a voice of reason in the ones he makes in the future.

CINDY COHN
I really appreciate that Alex has taken his skills and talents and platforms to really lift up the kind of ordinary people who are finding community online and help us find ways to keep that part, and even lift it up as we move into the future.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. 

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob 

You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Shots Fired: Congressional Letter Questions DHS Funding of ShotSpotter

There is a growing pile of evidence that cities should drop ShotSpotter, the notorious surveillance system that purportedly uses acoustic sensors to detect gunshots, due to its inaccuracies and the danger it creates in communities where it’s installed. In yet another blow to the product and the surveillance company behind it—SoundThinking—Congress members have sent a letter calling on the Department of Homeland Security to investigate how it provides funding to local police to deploy the product.

The seven-page letter, from Senators Ed Markey, Ron Wyden, and Elizabeth Warren and Representative Ayanna Pressley, begins by questioning the “accuracy and effectiveness” of ShotSpotter, then outlines some of the latest evidence of its abysmal performance, including multiple studies showing false positive rates—i.e., incorrectly classifying non-gunshot sounds as gunshots—at 70% or higher. In addition to its ineffectiveness, the Congress members voiced serious concerns regarding ShotSpotter’s contribution to discrimination, civil rights violations, and poor policing practices, because most ShotSpotter sensors are installed in overwhelmingly “Black, Brown and Latin[e] communities” at the request of local law enforcement. Together, the inefficacy of the technology and these placements can result in police being deployed to what they expect to be a dangerous situation with guns drawn, increasing the chances of all-too-common police violence against civilians in the area.
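To make that headline statistic concrete, here is a minimal sketch in Python, using invented numbers. The studies the letter cites generally measure the share of alerts that could not be corroborated as actual gunfire; none of the figures below come from the letter itself.

# Hypothetical illustration of a ShotSpotter-style false positive rate.
# All counts are invented for the example.
alerts_reviewed = 1000     # total alerts sent to police for review
confirmed_gunfire = 300    # alerts corroborated as actual gunshots
false_positives = alerts_reviewed - confirmed_gunfire

false_positive_rate = false_positives / alerts_reviewed
print(f"False positive rate: {false_positive_rate:.0%}")  # prints "False positive rate: 70%"

Under this definition, a 70% rate means seven of every ten police deployments the system triggers respond to something that was never a gunshot.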

In light of the grave concerns raised by the use of ShotSpotter, the lawmakers are demanding that DHS investigate its funding, and whether it’s an appropriate use of taxpayer dollars. We agree: DHS should investigate, and should end its program of offering grants to local law enforcement agencies to contract with SoundThinking. 

The letter can be read in its entirety here.

Georgia Prosecutors Stoke Fears over Use of Encrypted Messengers and Tor

In an indictment against Defend the Atlanta Forest activists in Georgia, state prosecutors are citing the use of encrypted communications to fearmonger. Alleging that the defendants in the indictment—who include journalists and lawyers, in addition to activists—were responsible for a number of crimes related to the Stop Cop City campaign, the state Attorney General’s prosecutors cast suspicion on the defendants’ use of Signal, Telegram, Tor, and other everyday data-protecting technologies.

“Indeed, communication among the Defend the Atlanta Forest members is often cloaked in secrecy using sophisticated technology aimed at preventing law enforcement from viewing their communication and preventing recovery of the information,” the indictment reads. “Members often use the dark web via Tor, use end-to-end encrypted messaging app Signal or Telegram.”

The secure messaging app Signal is used by tens of millions of people, and has hundreds of millions of global downloads. In 2021, users moved to the nonprofit-run private messenger en masse as concerns were raised about the data-hungry business models of big tech. In January of that year, former world’s richest man Elon Musk tweeted simply “Use Signal.” And world-famous NSA whistle-blower Edward Snowden tweeted in 2016 what in information security circles would become a meme and truism: “Use Tor. Use Signal.”

Despite what the bombastic language would have readers believe, installing and using Signal and Tor is not an initiation rite into a dark cult of lawbreaking. The “sophisticated technology” being used here consists of apps that are free, popular, openly distributed, and widely accessible to anyone with an internet connection. Going further, the indictment ascribes to those using the apps the sole intention of obstructing law enforcement surveillance. Taking this assertion at face value, any judge or reporter reading the indictment is led to believe that everyone using the apps simply wants to evade the police. In fact, these apps make it harder for law enforcement to access communications precisely because the encryption protocol protects messages from everyone not intended to receive them—including the users’ ISP, local network hackers, and the Signal nonprofit itself.
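For readers who want to see the underlying principle, here is a minimal sketch using the PyNaCl library in Python. This is not Signal's actual protocol (Signal uses the more elaborate Double Ratchet); it only demonstrates the basic property described above: everyone relaying the message sees opaque bytes, and only the intended recipient's key can decrypt them.

# A minimal public-key encryption sketch with PyNaCl (pip install pynacl).
# Illustrative only -- this is NOT Signal's Double Ratchet protocol.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts a message to Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at the park at noon")

# An ISP, server operator, or network eavesdropper relaying `ciphertext`
# sees only random-looking bytes. Only Bob's private key opens it.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at the park at noon'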

Elsewhere, the indictment homes in on the use of anti-surveillance techniques to further its tenuous attempts to malign the defendants: “Most ‘Forest Defenders’ are aware that they are preparing to break the law, and this is demonstrated by premeditation of attacks.” Among a laundry list of other techniques, the preparation is supposedly marked by “using technology avoidance devices such as Faraday bags and burner phones.” Stoking fears around the use of anti-surveillance technologies sets a dangerous precedent for everyone who simply doesn’t want to be tracked wherever they go. In protest situations, carrying a prepaid disposable phone can be a powerful defense against being persecuted for participating in First Amendment-protected activities. Vilifying such activities as the acts of wrongdoers would befit totalitarian societies, not ones in which speech is allegedly a universal right.

To be clear, prosecutors have apparently not sought to use court orders to compel either the defendants or the companies named to enter passwords or otherwise open devices or apps. But vilifying the defendants’ use of common-sense encryption is a dangerous step in cases that the DeKalb County District Attorney has already dropped out of, citing “different prosecutorial philosophies.”

Using messengers that protect user communications, browsers that protect user anonymity, and anti-surveillance techniques when out and about are all useful strategies in a range of situations. Whether you’re looking into a sensitive medical condition, visiting a reproductive health clinic with the option of terminating a pregnancy, protecting trade secrets from a competitor, avoiding stalkers or abusive domestic partners, protecting attorney-client exchanges, or simply keeping your communications, browsing, and location history private, these techniques can come in handy. It is their very effectiveness that has led to the widespread adoption of privacy-protective technologies and techniques. When state prosecutors spread fear around the use of these powerful techniques, they set us down a dangerous path where citizens are more vulnerable and at risk.

Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech 

As Congress appears ready to gut one of the internet’s most important laws for protecting free speech, it is ignoring how that law protects and benefits millions of Americans’ ability to speak online every day.

The House Energy and Commerce Committee is holding a hearing on Wednesday on a bill that would end 47 U.S.C. § 230 (“Section 230”) in 18 months. The authors of the bill argue that setting a deadline to either change or eliminate Section 230 will force the Big Tech online platforms to the bargaining table to create a new regime of intermediary liability.  

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion for what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it needs to abandon this bill and spend time listening to the small services and everyday users who would be harmed should they repeal Section 230.  

Section 230 Protects Everyday Internet Users 

The bill introduced by House Energy & Commerce Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) is based on a series of mistaken assumptions and fundamental misunderstandings about Section 230. Mike Masnick at Techdirt has already explained many of the flawed premises and factual errors that the co-sponsors have made.

We won’t repeat the many errors that Masnick identifies. Instead, we want to focus on what we see as a glaring omission in the co-sponsors’ argument: how central Section 230 is to ensuring that every person can speak online.

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That is in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of other people.

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services. 

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. 

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

The co-sponsors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

Nothing could be further from the truth. If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and in cases of illegal behavior, work with law enforcement to hold those users responsible.   

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That law strongly discourages services from moderating or even knowing about user-generated content. This is because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services will have a huge incentive to just not moderate and not look for bad behavior. Taking the sponsors of the bill at their word, this would result in the exact opposite of their goal of protecting children and adults from harmful content online.  

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.
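To illustrate what such a reverse location search amounts to in practice, here is a hypothetical sketch in Python. The data layout and every name in it are invented for illustration; none of this is drawn from the Sheriff's actual systems.

# Hypothetical sketch of a "reverse location search" over ankle-monitor GPS
# data: given a place and a time window, return everyone whose location
# trail passed nearby.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    person_id: str
    lat: float
    lon: float
    time: datetime

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two coordinates, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def reverse_location_search(pings, lat, lon, start, end, radius_m=100):
    """Return IDs of everyone with a ping inside the geofence and time window."""
    return {
        p.person_id
        for p in pings
        if start <= p.time <= end and distance_m(p.lat, p.lon, lat, lon) <= radius_m
    }

A query like this inverts ordinary suspicion: instead of asking where one suspect was, it asks which of the monitored people, all presumed innocent, happened to pass a given spot at a given time.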

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

EFF Urges Ninth Circuit to Hold Montana’s TikTok Ban Unconstitutional

Montana’s TikTok ban violates the First Amendment, EFF and others told the Ninth Circuit Court of Appeals in a friend-of-the-court brief and urged the court to affirm a trial court’s holding from December 2023 to that effect.

Montana’s ban (which EFF and others opposed) prohibits TikTok from operating anywhere within the state and imposes financial penalties on TikTok or any mobile application store that allows users to access TikTok. The district court recognized that Montana’s law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” and blocked Montana’s ban from going into effect. Last year, EFF—along with the ACLU, Freedom of the Press Foundation, Reason Foundation, and the Center for Democracy and Technology—filed a friend-of-the-court brief in support of TikTok and Montana TikTok users’ challenge to this law at the trial court level.

As the brief explains, Montana’s TikTok ban is a prior restraint on speech that prohibits Montana TikTok users—and TikTok itself—from posting on the platform. The law also restricts TikTok’s ability to make decisions about curating its platform.

Prior restraints such as Montana’s ban are presumptively unconstitutional. For a court to uphold a prior restraint, the First Amendment requires it to satisfy the most exacting scrutiny. The prior restraint must be necessary to further an urgent interest of the highest magnitude, and the narrowest possible way for the government to accomplish its precise interest. Montana’s TikTok ban fails to meet this demanding standard.

Even if the ban is not a prior restraint, the brief illustrates that it would still violate the First Amendment. Montana’s law is a “total ban” on speech: it completely forecloses TikTok users’ speech with respect to the entire medium of expression that is TikTok. As a result, Montana’s ban is subject to an exacting tailoring requirement: it must target and eliminate “no more than the exact source of the ‘evil’ it seeks to remedy.” Montana’s law is undeniably overbroad and fails to satisfy this scrutiny.

This appeal is happening in the immediate aftermath of President Biden signing into law federal legislation that effectively bans TikTok in its current form, by requiring TikTok to divest of any Chinese ownership within 270 days. This federal law raises many of the same First Amendment concerns as Montana’s.

It’s important that the Ninth Circuit take this opportunity to make clear that the First Amendment requires the government to satisfy a very demanding standard before it can impose these types of extreme restrictions on Americans’ speech.

Fair Use Still Protects Histories and Documentaries—Even Tiger King

Copyright’s fair use doctrine protects lots of important free expression against the threat of ruinous lawsuits. Fair use isn’t limited to political commentary or erudite works – it also protects popular entertainment like Tiger King, Netflix’s hit 2020 documentary series about the bizarre and sometimes criminal exploits of a group of big cat breeders. That’s why a federal appeals court’s narrow interpretation of fair use in a recent copyright suit threatens not just the producers of Tiger King but thousands of creators who make documentaries, histories, biographies, and even computer software. EFF and other groups asked the court to revisit its decision. Thankfully, the court just agreed to do so.

The case, Whyte Monkee Productions v. Netflix, was brought by a videographer who worked at the Greater Wynnewood Exotic Animal Park, the Oklahoma attraction run by Joe Exotic that was chronicled in Tiger King. The videographer sued Netflix for copyright infringement over the use of his video clips of Joe Exotic in the series. A federal district court in Oklahoma found Netflix’s use of one of the video clips—documenting Joe Exotic’s eulogy for his husband Travis Maldonado—to be a fair use. A three-judge panel of the Court of Appeals for the Tenth Circuit reversed that decision and remanded the case, ruling that the use of the video was not “transformative,” a concept that’s often at the heart of fair use decisions.

The appeals court based its ruling on a mistaken interpretation of the Supreme Court’s opinion in Andy Warhol Foundation for the Visual Arts v. Goldsmith. Warhol was a deliberately narrow decision that upheld the Supreme Court’s prior precedents about what makes a use transformative while emphasizing that commercial uses are less likely to be fair. The Supreme Court held that commercial re-uses of a copyrighted work—in that case, licensing an Andy Warhol print of the artist Prince for a magazine cover when the print was based on a photo that was also licensed for magazine covers—required a strong justification. The Warhol Foundation’s use of the photo was not transformative, the Supreme Court said, because Warhol’s print didn’t comment on or criticize the original photograph, and there was no other reason why the foundation needed to use a print based on that photograph in order to depict Prince. In Whyte Monkee, the Tenth Circuit homed in on the Supreme Court’s discussion about commentary and criticism but mistakenly read it to mean that only uses that comment on an original work are transformative. The court remanded the case to the district court to re-do the fair use analysis on that basis.

As EFF, along with Authors Alliance, American Library Association, Association of Research Libraries, and Public Knowledge explained in an amicus brief supporting Netflix’s request for a rehearing, there are many kinds of transformative fair uses. People creating works of history or biography frequently reproduce excerpts from others’ copyrighted photos, videos, or artwork as indispensable historical evidence. For example, using sketches from the famous Zapruder film in a book about the assassination of President Kennedy was deemed fair, as was reproducing the artwork from Grateful Dead posters in a book about the band. Software developers use excerpts from others’ code—particularly declarations that describe programming interfaces—to build new software that works with what came before. And open government organizations, like EFF client Public.Resource.Org, use technical standards incorporated into law to share knowledge about the law. None of these uses involves commentary or criticism, but courts have found them all to be transformative fair uses that don’t require permission.

The Supreme Court was aware of these uses and didn’t intend to cast doubt on their legality. In fact, the Supreme Court cited to many of them favorably in its Warhol decision. And the Court even engaged in some non-commentary fair use itself when it included photos of Prince in its opinion to illustrate how they were used on magazine covers. If the Court had meant to overrule decades of court decisions, including its own very recent Google v. Oracle decision about software re-use, it would have said so.

Fortunately, the Tenth Circuit heeded our warning, and the warnings of Netflix, documentary filmmakers, legal scholars, and the Motion Picture Association, all of whom filed briefs. The court vacated its decision and asked for further briefing about Warhol and what it means for documentary filmmakers.

The bizarre story of Joe Exotic and his friends and rivals may not be as important to history as the Kennedy assassination, but fair use is vital to bringing us all kinds of learning and entertainment. If other courts start treating the Warhol decision as a radical rewriting of fair use law when that’s not what the Supreme Court said at all, many kinds of free expression will face an uncertain future. That’s why we’re happy that the Tenth Circuit withdrew its opinion. We hope the court will, as the Supreme Court did, reaffirm the importance of fair use.

The Cybertiger Strikes Again! EFF's 8th Annual Tech Trivia Night

Well into spring, with the weather getting warmer, we knew it was only a matter of time till the Cybertiger awoke from his slumber. But we were prepared. Prepared to quench the Cybertiger's thirst for tech nerds answering his obscure and fascinating tech trivia questions.

But how did we prepare for the Cybertiger's quiz? Well, with our 8th Annual Tech Trivia Night, of course! We gathered fellow digital freedom supporters to test their tech know-how, eat delicious tacos and churros, and enjoy special tech-themed drinks, including LimeWire, Moderated Content, and Zero Cool.

Nine teams gathered before the Cybertiger, ready to battle for the *new* wearable first, second, and third place prizes:

EFF's Tech Trivia Awards! An acrylic award with an image of a blue/pink tiger.

But this year, the Cybertiger had a surprise up his sleeve! A new way to secure points had been added: bribes. Teams could donate to EFF to sway the judges and increase their total points to secure their lead. Still, the first-place prize went to the Honesty Winner (the top score before bribes were counted), so participants needed to be on their A-game to win!

At the end of round two of six, teams Bad @ Names and 0x41434142 were tied for first place, making for a tense game! It wasn't until the bonus question after round two, when the Cybertiger asked each team, "What prompt would you use to jailbreak the Cybertiger AI?", that Bad @ Names pulled into first place with their answer.

By the end of round four, Bad @ Names was still in first place, leading by only three points! Could they win the bonus question again? This time, each team was asked to create a ridiculous company elevator pitch that would fit right in on the RSA expo floor. (Spoiler alert: these company ideas were indeed ridiculous!)

After the sixth round of questions, the Cybertiger gave one last chance for teams to scheme their way to victory! The suspense built, but after some time, we got our winners... 

In third place, AI Hallucinations with 60 total points! 

In second place, and also winning the bribery award, 0x41434142, with 145 total points!

In first place, as our Honesty Winner... Bad @ Names with 68 total points!

EFF’s sincere appreciation goes out to the many participants who joined us for a great quiz over tacos and drinks while never losing sight of EFF’s mission to drive the world towards a better digital future. Thank you to the digital freedom supporters around the world helping to ensure that EFF can continue working in the courts and on the streets to protect online privacy and free expression.

Thanks to EFF's Luminary Organizational Members DuckDuckGo, No Starch Press, and the Hering Foundation for their year-round support of EFF's mission. If you or your company are interested in supporting a future EFF event, or would like to learn more about Organizational Membership, please contact Tierney Hamilton.

Learn about upcoming EFF events when you sign up for our email list, or just check out our event calendar. We hope to see you soon!

Coalition to Calexico: Think Twice About Reapproving Border Surveillance Tower Next to a Public Park

Update May 15, 2024: The letter has been updated to include support from the Southern Border Communities Coalition. It was re-sent to the Calexico City Council. 

On the southwest side of Calexico, a border town in California’s Imperial Valley, a surveillance tower casts a shadow over a baseball field and a residential neighborhood. In 2000, the Immigration and Naturalization Service (the precursor to the Department of Homeland Security (DHS)) leased the corner of Nosotros Park from the city for $1 a year for the tower. But now the lease has expired, and DHS component Customs & Border Protection (CBP) would like the city to re-up the deal.

[Image: map of Nosotros Park showing the location of the tower.]

But times—and technology—have changed. CBP’s new strategy calls for adopting powerful artificial intelligence technology to not only control the towers, but to scan, track and categorize everything they see.  

Now, privacy and social justice advocates including the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition have joined EFF in sending the city council a letter urging them to not sign the lease and either spike the project or renegotiate it to ensure that civil liberties and human rights are protected.  

The groups write:

The Remote Video Surveillance System (RVSS) tower at Nosotros Park was installed in the early 2000s when video technology was fairly limited and the feeds required real-time monitoring by human personnel. That is not how these cameras will operate under CBP's new AI strategy. Instead, these towers will be controlled by algorithms that will autonomously detect, identify, track and classify objects of interest. This means that everything that falls under the gaze of the cameras will be scanned and categorized. To an extent, the AI will autonomously decide what to monitor and recommend when Border Patrol officers should be dispatched. While a human being may be able to tell the difference between children playing games or residents getting ready for work, AI is prone to mistakes and difficult to hold accountable. 

In an era where the public has grave concerns on the impact of unchecked technology on youth and communities of color, we do not believe enough scrutiny and skepticism has been applied to this agreement and CBP's proposal. For example, the item contains very little in terms of describing what kinds of data will be collected, how long it will be stored, and what measures will be taken to mitigate the potential threats to privacy and human rights. 

The letter also notes that CBP’s tower programs have repeatedly failed to achieve the promised outcomes. In fact, the DHS Inspector General found that the early 2000s program “yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts.”

The groups are calling for Calexico to press pause on the lease agreement until CBP can answer a list of questions about the impact of the surveillance tower on privacy and human rights. Should the city council insist on going forward, they should at least require regular briefings on any new technologies connected to the tower and the ability to cancel the lease on much shorter notice than the 365 days currently spelled out in the proposed contract.  

One (Busy) Day in the Life of EFF’s Activism Team

EFF is an organization of lawyers, technologists, policy professionals, and, importantly, full-time activists, who fight to make sure that technology enhances rather than threatens civil liberties on a global scale. EFF’s activism team includes experienced issue experts, master communicators, and grassroots organizers who coordinate and orchestrate EFF’s activist campaigns, which include but go well beyond litigation, technical analyses and solutions, and direct lobbying of legislators.

If you’ve ever wondered what it would be like to work on the activism team at EFF, or if you are curious about applying for a job at EFF, take a look at one exceptional (but also fairly ordinary) day in the life of five members of the team:

Jillian York, Director For International Freedom of Expression

I wake up around 9:00, make coffee, and check my email and internal messages (we use Mattermost, a self-hosted chat tool). I live in Berlin—between four and nine hours ahead of most of my colleagues—which on most days enables me to get some “deep work” done before anyone else is online.

I see that one of my colleagues in San Francisco left a late-night message asking for someone to edit a short blog post. No one else is awake yet, so I jump on it. I then work on a piece of writing of my own, documenting the case of Alaa Abd El Fattah, an Egyptian technologist, blogger, and EFF supporter who’s been imprisoned on and off for the past decade. After that, I respond to some emails and messages from colleagues from the day prior.

EFF offers us flexible hours, and since I’m in Europe I often have to take calls in the evening (6 or 7 pm my time is 9 or 10 am San Francisco time, when a lot of team meetings take place). I see this as an advantage, as it allows me to meet a friend for lunch and hit the gym before heading back to work. 

There’s a dangerous new bill being proposed in a country where we don’t have much expertise, but which looks likely to have a greater impact across the region, so a colleague and I hop on a call with a local digital rights group to plan a strategy. When we work internationally, we always consult or partner with local groups to make sure that we’re working toward the best outcome for the local population.

While I’m on the call, my Signal messages start blowing up. A lot of the partners we work with in another region of the world prefer to organize there for reasons of safety, and there’s been a cyberattack on a local media publication. Our partners are looking for some assistance in dealing with it, so I send some messages to colleagues (both at EFF and other friendly organizations) to get them the right help.

After handling some administrative tasks, it’s time for the meeting of the international working group. In that group, we discuss threats facing people outside the U.S., often in areas that are underrepresented by both U.S. and global media.

After that meeting, it's off to prep for a talk I'll be giving at an upcoming conference. There have been improvements in social media takedown transparency reporting, but there are a lot of ways to continue that progress, and a former colleague and I will be hosting a mock game show about the heroes and anti-heroes of transparency. By the time I finish that, it's nearly 11 pm my time, so it's off to bed for me, but not for everyone else!

Matthew Guariglia, Senior Policy Analyst Responsible for Government Surveillance Advocacy

My morning can sometimes start surprisingly early. This morning, a reporter I often speak to called to ask if I had any comments about a major change to how Amazon Ring security cameras will allow police to request access to users’ footage. I quickly try to make sense of the new changes—Amazon’s press release doesn’t say nearly enough. Giving a statement to the press requires a brief huddle between me, EFF’s press director, and other lawyers, technologists, and activists who have worked on our Ring campaign over the last few years. Soon, we have a statement that conveys exactly what we think Amazon needs to do differently, and what users and non-users should know about this change and its impact on their rights. About an hour after that, we turn our brief statement into a longer blog post for everyone to read.

For the rest of the day, in between other obligations and meetings, I take press calls and do TV interviews with curious reporters asking whether this change in policy is a win for privacy. My first meeting is with representatives of about a dozen mostly local groups in the Bay Area, where EFF is located, about the next steps for opposing Proposition E, a ballot measure that would greatly reduce oversight of what technology the San Francisco Police Department may use. I send a few requests to our design team about printing window signs and then talk with our Activism Director about plans to potentially fly a plane over the city. Shortly after that, I’m in a coalition meeting of national civil liberties organizations discussing ways of keeping a clean reauthorization of Section 702 (a mass surveillance authority that expires this year) out of a must-pass bill that would continue to fund the government.

In the afternoon, I watch and take notes as a Congressional committee holds a hearing about AI use in law enforcement. Keeping an eye on this allows me to see what arguments and talking points law enforcement is using, which members of Congress seem critical of AI use in policing and might be worth getting in touch with, and whether there are any revelations in the hearing that we should communicate to our members and readers. 

After the hearing, I send brief notes to a Senator and their staff on a draft of a public letter they intend to send to industry leaders about data collection—and about when law enforcement may or may not request access to stored user data.

Tomorrow,  I’ll follow up on many of the plans made over the course of this day: I’ll need to send out a mass email to EFF supporters in the Bay Area rallying them to join in the fight against Proposition E, and review new federal legislation to see if it offers enough reform of Section 702 that EFF might consider supporting it. 

Hayley Tsukayama, Associate Director of Legislative Activism

I settle in with a big mug of tea to start a day full of online meetings. This probably sounds boring to a lot of people, but I know I'll have a ton of interesting conversations today.

Much of my job coordinating our state legislative work requires speaking with like-minded organizations across the country. EFF tries, but we can't be everywhere we want to be all of the time. So, for example, we host a regular call with groups pushing for stronger state consumer data privacy laws. This call gives us a place to share information about a dozen or more privacy bills in as many states. Some groups on the call focus on one state; others, like EFF, work in multiple states. Our groups may not agree on every bill, but we're all working toward a world where companies must respect our privacy by default.

You know, just a small goal.

Today, we get a summary of a hearing that a friendly lawmaker organized to give politicians from several states a forum to explain how big tech companies, advertisers, and data brokers have stymied strong privacy legislation. This is one reason we compare notes: the more we know about what they're doing, the better we can fight them—even though the other side has more money and staff for state legislative work than all of us combined.

From there, I jump to a call on emerging AI legislation in states. Many companies pushing weak AI regulation make software that monitors employees, so this work has connected me to a universe of labor advocates I've never gotten to work with before. I've learned so much from them, both about how AI affects working conditions and about the ways they organize and mobilize people. Working in coalitions shows me how different people bring their strengths to a broader movement.

At EFF, our activists know: we win with words. I make a note to myself to start drafting a blog post on some bad copy-paste AI bills showing up across the country, which companies have carefully written to exempt their own products.

My position lets me stick my nose into almost every EFF issue, which is one thing I love about it. For the rest of the day, I meet with a group of right-to-repair advocates whose decades of advocacy have racked up incredible wins in the past couple of years. I update a position letter to the California legislature about automotive data. I send a draft action to one of our lawyers—who I get to work with every day—about a great Massachusetts bill that would prohibit the sale of location data without permission. I debrief with two EFF staffers who testified this week in Sacramento on two California bills—one on IP issues, another on police surveillance. I polish a speech I’m giving with one of my colleagues, who has kindly made time to help me. I prep for a call with young activists who want to discuss a bill idea.

There is no "typical" day in my job. The one constant is that I get to work with passionate people, at EFF and outside of it, who want to make the world a better place. We tackle tough problems, big and small—but always ones that matter. And, sure, I have good days and bad days. But I can say this: they are rarely boring.

Rory Mir, Associate Director of Community Organizing 

As an organizer at EFF, I juggle long-term projects and needs with rapid responses, both for EFF and for our local allies in our grassroots network, the Electronic Frontier Alliance. Days typically start with morning rituals that keep me grounded as a remote worker: I wake up, make coffee, put on music. I log in, set TODOs, clear my inbox. I get dressed, check the news, take the morning dog walk.

Back at my desk, I start with small tasks—reach out to a group I met at a conference, add an event to the EFF calendar, and promote EFA events on social media. Then, I get a call from a Portland EFA group. A city ordinance shedding light on police use of surveillance tech needs support. They’re working on a coalition letter EFF can sign, so I send it along to our street level surveillance team, schedule a meeting, and reach out to aligned groups in PDX.

Next up is a policy meeting on consumer privacy. Yesterday in Congress, the House passed a bill undermining privacy (again) and we need to kill it (again). We discuss key Senate votes, and I remember that an EFA group had a good relationship with one of those members during a campaign last year. I reach out to the group with links to our current campaign and ask if they can help us lobby on the issue.

After a quick vegan lunch, I start a short Deeplinks post celebrating a major website connecting to the Fediverse, promoting folks’ autonomy online. I’m not quite done in time for my next meeting, planning an upcoming EFA meetup with my team. Before we get started, though, an urgent message from San Diego interrupts us—the city council has moved a crucial hearing on ALPRs (automated license plate readers) to tomorrow. We reschedule and pivot to drafting an action alert email for the area, as well as social media pushes to rally support.

In the home stretch, I set up that meeting with the Portland groups and make sure our newest EFA member has information on our workshop next week. After my last meeting of the day, a coalition call on Right to Repair (with Hayley!), I send my blog post to a colleague for feedback and wrap up the day in one of our off-topic chats. While I’m passionately ranking Godzilla movies, my dog helpfully reminds me it’s time to log off and go on another walk.

Thorin Klosowski, Security and Privacy Activist

I typically start my day with reading—catching up on some broad policy things, but just as often poking through product-related news sites and consumer tech blogs—so I can keep an eye out for any new sorts of technology terrors that might be on the horizon, privacy promises that seem too good to be true, or any data breaches and other security gaffes that might need to be addressed.

If I’m lucky (or unlucky, depending on how you look at it), I’ll find something strange enough to bring to our Public Interest Technology crew for a more detailed look. Maybe it’ll be the launch of a new feature that promises privacy but doesn’t seem to deliver it, or in rare cases, a new feature that actually seems to. In either instance, if it seems worth a closer look, I’ll often then chat through all this with the technologists who specialize in the technology at play, then decide whether it’s worth writing something, or just keeping in our deep log of “terrible technologies to watch out for.” This process works in reverse, too—where someone on the PIT team brings up something they’re working on, like sketchyware on an Android tablet, and we’ll brainstorm some ways to help people who’re stuck with these types of things make them less sucky.

Today, I’m also tagging along with a couple of members of the PIT team at a meeting with representatives from a social media company that’s rolling out a new feature in its end-to-end encrypted chat app. The EFF technologists will ask smart, technical questions and reference research papers with titles like “Unbreakable: Designing for Trustworthiness in Private Messaging,” while I furiously take notes and wonder how on earth we’ll explain all the positive (or negative) effects this feature might have on individual privacy if it does in fact launch.

With whatever time I have left, I’ll then work on Surveillance Self-Defense, our guide to protecting you and your friends from online spying. Today, I’m working through updating several of our encryption guides, which means chatting with our resident encryption experts on both the legal and PIT teams. What makes SSD so good, in my eyes, is how much knowledge backs every single word of every guide. This is what sets SSD apart from the graveyard of security guides online, but it also means a lot of wrangling to get eyes on everything that goes on the site. Sometimes a guide update clicks together smoothly and we update things quickly. Sometimes one update to a guide cascades across a half dozen others, and I start to feel like I have one of those serial killer boards, but I’m keeping track of several serial killers across multiple timelines. But however an SSD update plays out, it all needs to get translated, so I’ll finish off the day with a look at a spreadsheet of all the translations to make sure I don’t need to send anything new over (or, just as often, realize I’ve already gotten translations back that need to be put online).

*****

We love giving people a picture of the work we do on a daily basis at EFF to help protect your rights online. Our former Activism Directors, Elliot Harmon and Rainey Reitman, each wrote one of these blogs in the past as well. If you’d like to join us on the EFF Activism Team, or anywhere else in the organization, check out opportunities to do so here.

Speaking Freely: Mohamed El Gohary

Interviewer: Jillian York

Mohamed El Gohary is an open-knowledge enthusiast. After majoring in Biomedical Engineering, he switched careers in October 2010 to work as a social media manager for the Al-Masry Al-Youm newspaper until October 2011, when he joined Global Voices on contract, managing Lingua until the end of 2021. He now works for IFEX as the MENA Network Engagement Specialist.

This interview has been edited for length and clarity.

York: What does free speech or free expression mean for you?

Free speech, freedom of expression, means for me the ability of people to govern themselves. It means that the real meaning of democracy cannot be realized without freedom of speech, without people expressing their needs across different spectrums. It’s the idea of civic space, the idea of people basically living their lives and using different means of communication to get things done through freedom of speech.

York: What’s an experience that shaped your views on freedom of expression?

Well, my background is using the internet. In the early days of using the internet, I always believed that it would enable people to express themselves in a way that supports a better democratic process. But right now that has changed, because online spaces have shifted from decentralized to centralized, and centralized spaces are the antithesis of democracy. So the internet turns into an oligarchs’ world. Which brings us back, again, to freedom of expression. I think there are uncharted territories, in terms of activism and in terms of platforms online and offline, where we might reinvent the wheel in a way that gives people a better democratic process in terms of freedom of expression.

York: You came up in an era where social media had so much promise, and now, like you said about the oligarchical online space—which I tend to agree with—we’re in kind of a different era. What are your views right now on regulation of social media?

Well, it’s still related to the democratic process. It’s a similar conversation to, let’s say, the Internet Governance Forum: where is the decision making? Who holds the power in the dynamics around decision making? There are governments, then there are private companies, then there is law and the rule of law, and then there is civil society. And there’s good civil society and bad civil society, in terms of their relationships with both governments and companies. So it goes back to freedom of expression, both collective and individual. And it comes down to people and freedom of assembly, in terms of the absolute right and in terms of practice, to reinvent the democratic process. It’s the whole system; it turns out it’s not just freedom of expression. Freedom of expression has an important role, and the democratic process can’t be reinvented without looking at freedom of expression. The whole system (democracy, Western democracy, and how different countries apply it) works in ways that entrench the power of the rich and powerful while the rest of the population just loses hope in different ways. Everything goes back to reinventing the democratic process. And freedom of expression is a big part of it.

York: So this is a special interview, we’re here at the IFEX general meeting. What are some of the things that you’re seeing here, either good or bad, and maybe even what are some things that give you hope about the IFEX network?

I think, inside the IFEX network and the extended IFEX network, it’s the importance of connection. It’s the importance of collaboration. Governments always try to work together to establish their power structures, while the resources governments have are not always available to civil society. So it’s important that civil society organizations (and IFEX is an example of collaboration between a large number of organizations around the world) make these kinds of collaborations happen, at all scales and in all directions, while still encouraging every organization to look at itself as an organization, to look at how it’s working. To ask themselves: is it just a job? Are we working for a cause? Are we working for a cause in the right way? It’s the other side of the coin to how governments work to maintain existing power structures. There needs to be that other side of the coin in terms of, again, reinventing the democratic process.

York: Is there anything I didn’t ask that you want to mention?

My only frustration is with organizations that treat the work as if it is just a job and do only the minimum, for example. And that’s the good case scenario. The bad case scenario is when a civil society organization is working for the government or for private companies, where organizations can be more of a burden than a resource. I don’t know how to approach that without cost. Cost is difficult, cost is expensive, it’s ugly, it’s not something you look for when you start your day. And there is a very small number of people and organizations who would be willing to even think about paying the price of being an inconvenience to organizations that have become burdening entities. That would be my immediate and long-term frustration with civil society, at least in my vicinity.

York: Who is your free speech hero?

For me, as an Egyptian, that would be Alaa Abd El-Fattah. He is a perfect example of someone who looks forward to being an inconvenience. And there are not a lot of people who would be this kind of inconvenience. There are many people who appear to be an inconvenience, but they aren’t really. He would be my hero.

Big Tech to EU: "Drop Dead"

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into, so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future).

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants.

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on.

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for.

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

 Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both on the initial price of the app and on everything you buy from it thereafter. This is a remarkably high transaction fee—compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so. 

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls far short of the mark: between a blizzard of confusing junk fees (like the €0.50 per-install “Core Technology Fee” that the most popular apps will have to pay Apple even if they are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store, and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores.

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there are some pretty cool-sounding app stores waiting in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days.

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval). 

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies.

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should be requiring Meta to actually secure our willing, informed (and revocable) consent to carry on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could. 

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited: privacy is not a luxury good to be afforded only to the wealthiest.

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having its users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights, at scale.

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains (of data, power, and money) will be returned to the users and workers from whom that treasure came.

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles: so are the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK.

Victory! FCC Closes Loopholes and Restores Net Neutrality

Thanks to weeks of the public speaking up and taking action, the FCC has recognized the flaw in its proposed net neutrality rules. The FCC’s final adopted order on net neutrality restores bright line rules against all forms of throttling, once again creating strong federal protections for all Americans.

The FCC’s initial order had a narrow interpretation of throttling that could have allowed ISPs to create so-called fast lanes, speeding up access to certain sites and services and effectively slowing down other traffic flowing through your network. The order’s bright line rule against throttling now explicitly bans this kind of conduct, finding that the “decision to speed up ‘on the basis of Internet content, applications, or services’ would ‘impair or degrade’ other content, applications, or services which are not given the same treatment.” With this language, the order both hews more closely to the 2015 Order and further aligns with the strong protections Californians already enjoy via California’s net neutrality law.

As we celebrate this victory, it is important to remember that net neutrality is more than just bright line rules against blocking, throttling, and paid prioritization: It is the principle that ISPs should treat all traffic coming over their networks without discrimination. Customers, not ISPs, should decide for themselves how they would like to experience the internet. EFF—standing with users, innovators, creators, public interest advocates, libraries, educators and everyone else who relies on the open internet—will continue to champion this principle. 

The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority—with 3.4 million warrantless searches of Americans’ communications in 2021 alone—thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance, not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications, so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans’ communications “incidentally” under Section 702 surveillance. The government not only acquires these communications without a probable cause warrant; so long as it can make out some connection to FISA’s very broad definition of “foreign intelligence,” it can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. Section 702 creates an end run around the Constitution for the FBI and, with the Abbate memo, agents are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities that the government can require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider, which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to cover ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority that opens the door for this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   
