EFF and IFPTE Local 20 Attain Labor Contract

First-Ever, Three-Year Pact Protects Workers’ Pay, Benefits, Working Conditions, and More

SAN FRANCISCO—Employees and management at the Electronic Frontier Foundation have achieved a first-ever labor contract, they jointly announced today.  EFF employees have joined the Engineers and Scientists of California Local 20, IFPTE.  

The EFF bargaining unit includes more than 60 non-management employees in teams across the organization’s program and administrative staff. The contract covers the usual scope of subjects including compensation; health insurance and other benefits; time off; working conditions; nondiscrimination, accommodation, and diversity; hiring; union rights; and more. 

"EFF is its people. From the moment that our staff decided to organize, we were supportive and approached these negotiations with a commitment to enshrining the best of our practices and adopting improvements through open and honest discussions,” EFF Executive Director Cindy Cohn said. “We are delighted that we were able to reach a contract that will ensure our team receives the wages, benefits, and protections they deserve as they continue to advance our mission of ensuring that technology supports freedom, justice and innovation for all people of the world.” 

“We’re pleased to have partnered with EFF management in crafting a contract that helps our colleagues thrive both at work and outside of work,” said Shirin Mori, a member of the EFF Workers Union negotiating team. “This contract is a testament to creative solutions to improve working conditions and benefits across the organization, while also safeguarding the things employees love about working at EFF. We deeply appreciate the positive working relationship with management in establishing a strong contract.” 

The three-year contract was ratified unanimously by EFF’s board of directors Sept. 18, and by 96 percent of the bargaining unit on Sept. 25. It is effective Oct. 1, 2024 through Sept. 30, 2027. 

EFF is the largest and oldest nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.  

The Engineers and Scientists of California Local 20, International Federation of Professional and Technical Engineers, is a democratic labor union representing more than 8,000 engineers, scientists, licensed health professionals, and attorneys at PG&E, Kaiser Permanente, the U.S. Environmental Protection Agency, Legal Aid at Work, numerous clinics and hospitals, and other employers throughout Northern California.  

For the contract: https://ifpte20.org/wp-content/uploads/2024/10/Electronic-Frontier-Foundation-2024-2027.pdf 

For more on IFPTE Local 20: https://ifpte20.org/ 

Podcast Episode Rerelease: So You Think You’re A Critical Thinker

This episode was first released in March 2023.

With this year’s election just weeks away, concerns about disinformation and conspiracy theories are on the rise.

We covered this issue in a really enlightening talk in March 2023 with Alice Marwick, the director of research at Data & Society, and previously the cofounder and principal researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill.

We talked with Alice about why seemingly ludicrous conspiracy theories get so many followers, and when fact-checking does and doesn’t work. And we came away with some ideas for how to identify and leverage people’s commonalities to stem disinformation, while making sure that the most marginalized and vulnerable internet users are still empowered to speak out.

We thought this was a good time to republish that episode, in hopes that it might help you make some sense of what you might see and hear in the next few months.

If you believe conversations like this are important, we hope you’ll consider voting for How to Fix the Internet in the “General - Technology” category of the Signal Awards’ 3rd Annual Listener's Choice competition. Deadline for voting is Thursday, Oct. 17.

Vote now!

This episode was first published on March 21, 2023.

The promise of the internet was that it would be a tool to melt barriers and aid truth-seekers everywhere. But it feels like polarization has worsened in recent years, and more internet users are being misled into embracing conspiracies and cults.

Listen on Apple Podcasts, listen on Spotify, or subscribe via RSS.

You can also find this episode on the Internet Archive and on YouTube.

From QAnon to anti-vax screeds to talk of an Illuminati bunker beneath Denver International Airport, Alice Marwick has heard it all. She has spent years researching some dark corners of the online experience: the spread of conspiracy theories and disinformation. She says many people see conspiracy theories as participatory ways to be active in political and social systems from which they feel left out, building upon beliefs they already harbor to weave intricate and entirely false narratives.  

Marwick speaks with EFF’s Cindy Cohn and Jason Kelley about finding ways to identify and leverage people’s commonalities to stem this flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out.  

In this episode you’ll learn about:  

  • Why seemingly ludicrous conspiracy theories get so many views and followers  
  • How disinformation is tied to personal identity and feelings of marginalization and disenfranchisement 
  • When fact-checking does and doesn’t work  
  • Thinking about online privacy as a political and structural issue rather than something that can be solved by individual action 

Alice Marwick is director of research at Data & Society; previously, she was an Associate Professor in the Department of Communication and cofounder and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of Foreign Policy magazine’s 2017 Global Thinkers. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. She earned a political science and women's studies bachelor's degree from Wellesley College, a Master of Arts in communication from the University of Washington, and a PhD in media, culture and communication from New York University. 

Transcript

ALICE MARWICK
I show people these TikTok videos that are about these kind of outrageous conspiracy theories, like that the Large Hadron Collider at CERN is creating a multiverse. Or that there's, you know, this pyramid of tunnels under the Denver airport where they're trafficking children and people kinda laugh at them.

They're like, this is silly. And then I'm like, this has 3 million views. You know, this has more views than probably most of the major news stories that came out this week. It definitely has more views than any scientific paper or academic journal article I'll ever write, right? Like, this stuff has big reach, so it's important to understand it, even if it seems kind of frivolous or silly, or, you know, self-evident.

It's almost never self-evident. There's always some other reason behind it, because people don't do things arbitrarily. They do things that help them make sense of their lives, things that give their lives meaning. These are practices that people engage in because it means something to them. And so I feel like my job as a researcher is to figure out, what does this mean? Why are people doing this?

CINDY COHN
That’s Alice Marwick. The research she’s talking about is something that worries us about the online experience – the spread of conspiracy theories and misinformation. The promise of the internet was that it would be a tool that would melt barriers and aid truth-seekers everywhere. But sometimes it feels like polarization has worsened, and Internet users are misled into conspiracies and cults. Alice is trying to figure out why, how – and more importantly, how to fix it.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy.

This is our podcast series: How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to fix the internet. We're trying to make our digital lives better. EFF spends a lot of time warning about all the ways that things could go wrong and jumping into the fight when things do go wrong online, but what we'd like to do with this podcast is to give ourselves a vision of what the world looks like if we start to get it right.
JASON KELLEY
Our guest today is Alice Marwick. She’s a researcher at the Center for Information, Technology and Public Life at the University of North Carolina. She does qualitative research on a topic that affects everyone’s online lives but can be hard to grasp outside of anecdotal data – the spread of conspiracy theories and disinformation online.

This is a topic that many of us have a personal connection to – so we started off our conversation with Alice by asking what drew her into this area of research.

ALICE MARWICK
So like many other people I got interested in mis- and disinformation in the run-up to the 2016 election. I was really interested in how ideas that had formerly been like a little bit subcultural and niche in far right circles were getting pushed into the mainstream and circulating really wildly and widely.

And in doing that research, it sort of helped me understand disinformation as a frame for understanding the way that information ties into marginalization, I think, more broadly. And disinformation is often a mechanism by which the stories that the dominant culture tells about marginalized people get circulated.

JASON KELLEY
I think it's been a primary focus for a lot of people in a lot of ways over the last few years. I know I have spent a lot of time on alternative social media platforms over the last few years because I find the topics kind of interesting to figure out what's happening there. And also because I have a friend who has kind of entered that space and, uh, I like to learn, you know, where the information that he's sharing with me comes from, essentially, right. But one thing that I've been thinking about with him and and with other folks is, is there something that happened to him that made him kind of easily radicalized, if you will? And I, I don't think that's a term that, that you recommend using, but I think a lot of people just assume that that's something that happens.

That there are people who, um, you know, grew up watching the X-files or something and ended up more able to fall into these misinformation and disinformation traps. And I'm wondering if that's, if that's actually true. It seems like from your research, it's not.

ALICE MARWICK
It's not, and that's because there's a lot of different things that bring people to disinformation, because disinformation is really deeply tied to identity in a lot of ways. There's lots of studies showing that more or less, every American believes in at least one conspiracy theory, but the conspiracy theory that you believe in is really based on who you are.

So in some cases it is about identity, but I think the biggest misconception about disinformation is that the people who believe it are just completely gullible and that they don't have any critical thinking skills and that they go on YouTube and they watch a video or they listen to a podcast and all of a sudden their entire mindset shifts.

CINDY COHN
So why is radicalization not the right term? How do you think about this term and why you've rejected it?

ALICE MARWICK
The whole idea of radicalization is tied up in this countering violent extremism movement that is multinational, that is tied to this huge surveillance apparatus, to militarization, to, in many ways, like a very Islamophobic idea of the world. People have been researching why individuals commit political violence for 50 years and they haven't found any individual characteristics that make someone more susceptible to doing something violent, like committing a mass shooting or participating in the January 6th insurrection, for example. What we see instead is that there's a lot of different puzzle pieces that can contribute to whether somebody takes on a set of beliefs, an ideology, and whether they commit acts of violence in service of that ideology.

And I think the thing that's frustrating to researchers is sometimes the same thing can have two completely different effects in people. So there's this great study of women in South America who were involved in guerilla warfare, and some of those women, when they had kids, they were like, oh, I'm not gonna do this anymore.

It's too dangerous. You know, I wanna focus on my family. But then there was another set of women that when they had kids, they felt they had more to lose and they had to really contribute to this effort because it was really important to the freedom of them and their children.

So when you think about radicalization, there's this real desire to have this very simplistic pathway that everybody kind of just walks along and they end up a terrorist. But that's just not the way the world works. 

The second reason I don't like radicalization is because white supremacy is baked into the United States from its inception. And white supremacist ideas and racist ideas are pretty foundational. And they're in all kinds of day-to-day language and media and thinking. And so why would we think it's radical to be, for example, anti-black or anti-trans when anti-blackness and anti-transness have like these really long histories?

CINDY COHN
Yeah, I think that's right. And there is a way in which radicalization makes it sound as if, um, that's something other than our normal society. In many instances, that's not actually what's going on.

There's pieces of our society, the water we swim in every day, that are playing a big role in some of this stuff that ends up in a very violent place. And so by calling it radicalization, we're kind of creating an other that we're not a part of, and I think that means we might miss some of the pieces of this.

ALICE MARWICK
Yeah, and I think that when we think about disinformation, the difference between a successful and an unsuccessful disinformation campaign is often whether or not the ideas exist in the culture already. One of the reasons QAnon, I think, has been so successful is that it picks up a lot of other pre-circulating conspiracy theories.

It mixes them with anti-Semitism, it mixes them with homophobia and transphobia, and it kind of creates this hideous concoction, this like potion that people drink that reinforces a lot of their preexisting beliefs. It's not something that comes out of nowhere. It's something that's been successful precisely because it reinforces ideas that people already had.

CINDY COHN
I think the other thing that I saw in your research that might have been surprising, or at least was a little surprising to me, is how participatory QAnon is.

You took a look at some of the QAnon conversations, and you could see people pulling in pieces of knowledge from other things, you know, flight patterns and unexplained deaths and other things. It's something that they're co-creating, um, which I found fascinating.

ALICE MARWICK
It's really similar to the dynamics of fandom in a lot of ways. You know, any of us who have ever participated in, like, a Usenet group or a subreddit about a particular TV show, know that people love putting theories together. They love working together to try to figure out what's going on. And obviously we see those same dynamics at play in a lot of different parts of internet culture.

So it's about taking the participatory dynamics of the internet and sort of mixing them with what we're calling conspiratorial literacy, which is sort of the ability to assemble these narratives from all these disparate places to kind of pull together, you know, photos and Wikipedia entries and definitions and flight paths and, you know, news stories into these sort of narratives that are really hard to make coherent sometimes, ’cause they get really complicated.

But it's also about a form of political participation. I think there's a lot of people in communities where disinformation is rampant, where they feel like talking to people about QAnon or anti-vaxxing or white supremacy is a way that they can have some kind of political efficacy. It's a way for them to participate, and sometimes I think people feel really disenfranchised in a lot of ways.

JASON KELLEY
I wonder because you mentioned internet culture, if some of this is actually new, right? I mean, we had satanic panics before and something I hear a lot of in various places is that things used to be so much simpler when we had four television channels and a few news anchors and all of them said the same thing, and you couldn't, supposedly, you couldn't find your way out into those other spaces. And I think you call this the myth of the epistemically consistent past. Um, and is that real? Was that a real time that actually existed? 

ALICE MARWICK
I mean, let's think about who that works for, right? If you're thinking about like 1970, let's say, and you're talking about a couple of major TV networks, no internet, you know, your main interpersonal communication is the telephone. Basically, what the mainstream media is putting forth is the narrative that people are getting.

And there's a very long history of critique of the mainstream media, of putting forth a narrative that's very state sponsored, that's very pro-capitalist, that writes out the histories of lots and lots of different types of people. And I think one of the best examples of this is thinking about the White Press and the Black Press.

And the Black Press existed because the White Press didn't cover stories that were of interest to the black community, or they strategically ignored those stories. Like the Tulsa Race massacre, for example, like that was completely erased from history because the white newspapers were not covering it.

So when we think about an epistemically consistent past, we're thinking about the people who that narrative worked for.

CINDY COHN
I really appreciate this point. To me, what was exciting about the internet, and, you know, I'm a little older, I was alive during the seventies, um, and watched Walter Cronkite, and, you know, this idea that old white guys in New York get to decide what the rest of us see, which is, that's who ran the networks, right.

That, you know, and maybe we had a little PBS, so we got a little Sesame Street too. 

But the promise of the Internet was that we could hear from more and more diverse voices, and reduce the power of those gatekeepers. What is scary is that some people are now pretty much saying that the answer to the problems of today’s Internet is to find four old white guys and let them decide what all the rest of us see again.    

ALICE MARWICK
I think it's really easy to blame the internet for the ills of society, and I, I guess I'm a digital critic, but I'm ultimately, I love the internet, like I love social media. I love the internet. I love online community. I love the possibilities that the internet has opened up for people. And when I look at the main amplifiers of disinformation, it's often politicians and political elites whose platforms are basically independent of the internet.

Like people are gonna cover, you know, leading politicians regardless of what media they're covering them with. And when you look at something like the lies around the Dominion voting machines, like, yes, those lies start in these really fringy internet communities, but they're picked up and amplified incredibly quickly by mainstream politicians.

And then they're covered by mainstream news. So who's at fault there? I think that blaming the internet really ignores the fact that there's a lot of other players here, including the government, you know, politicians, these big mainstream media sources. And it's really convenient to blame all social media or just the entire internet for some of these ills, but I don't think it's accurate.

CINDY COHN
Well, one of the things that I saw in your research, and, and our friend Yochai Benkler has done in a lot of things, is the role of amplifiers, right? That these, these places where people, you know, agree about things that aren't true and, and converse about things that aren't true, they predate the internet. Maybe the internet gave a little juice to them, but what really gives juice to them is these amplifiers who, as I think you, you rightly point out, are some of the same people who were the mainstream media controllers in that hazy past of yore. Um, I think that if this stuff never makes it to more popular amplifiers, I don't think it becomes the kind of thing that we worry about nearly so much.

ALICE MARWICK
Yeah, I mean, when I was looking at white supremacist disinformation in 2017,  someone I spoke with pointed out that the mainstream media is the best recruitment tool for white supremacists because historically it's been really hard for white supremacists to recruit. And I'm not talking about like historically, like in the thirties and forties, I'm talking about like in the eighties and nineties when they had sort of lost a lot of their mainstream political power.

It was very difficult to find like-minded people, especially if people were living in places that were a little bit more progressive or were multiracial. Most people, reading a debunking story in the Times or the Post or whatever about white supremacist ideas, are going to disagree with those ideas.

But even if one in a thousand believes them and is like, oh wow, this is a person who's spreading white supremacist ideas, I can go to them and learn more about it. That is a far more powerful platform than anything that these fringe groups had in the past. And one of the things that we've noticed in our research is that often conspiracy theories go mainstream precisely because they're being debunked by the mainstream media.

CINDY COHN
Wow. So there's two kinds of amplifiers. There's the amplifiers who are trying to debunk things and accidentally perhaps amplify, but there are also people who are intentional amplifiers, and both of them have the same effect, or at least both of them can spread the misinformation.

ALICE MARWICK
Yeah. I mean, of course, debunking has great intentions, right? We don't want horrific misinformation and disinformation to go and spread unchecked. But one of the things that we noticed when we were looking at news coverage of disinformation was that a lot of the times the debunking aspect was not as strong as we would've expected.

You know, you would expect a news story saying, this is not true, this is false, the presumptions are false. But instead, you'd often get these stories where they kind of repeated the narrative and then at the end there was, you know, this is incorrect. And the false narrative is often much more interesting and exciting than whatever the banal truth is.

So I think a lot of this has to do with the business model of journalism, right? There's a real need to comment on everything that comes across Twitter, just so that you can get some of the clicks for it. And that's been really detrimental, I think, to journalists who have the time and the space to really research things and craft their pieces.

You know, it's an underpaid occupation. They're under a huge amount of economic and time pressure to like get stories out. A lot of them are working for these kind of like clickbaity farms that just churn out news stories on any hot topic of the day. And I think that is just as damaging and dangerous as some of these social media platforms.

JASON KELLEY
So when it comes to debunking, there's a sort of parallel, which is fact checking. And, you know, I have tried to fact check people, myself, um, individually. It doesn't seem to work. Does it work when it's, uh, kind of built into the platform as we've seen in different, um, in different spaces like Facebook or Twitter with community notes they're testing out now?

Or does that also kind of amplify it in some way because it just serves to upset, let's say, the people who have already decided to latch onto the thing that is supposedly being fact checked.

ALICE MARWICK
I think fact checking does work in some instances. If it's about things that people don't already have, like a deep emotional attachment to. I think sometimes also if it's coming from someone they trust, you know, like a relative or a close friend, I think there are instances in which it doesn't get emotional and people are like, oh, I was wrong about that, that's great. And then they move on. 

When it's something like Facebook where, you know, there's literally like a little popup saying, you know, this is untrue. Oftentimes what that does is it just reinforces this narrative that the social platforms are covering things up and that they're biased against certain groups of people because they're like, oh, Facebook only allows for one point of view.

You know, they censor everybody who doesn't believe X, Y, or Z. And the thing is that I think both liberals and conservatives believe that, obviously the narrative that social platforms censor conservatives is much stronger. But if you look at the empirical evidence, conservative stories perform much better on social media, specifically Facebook and Twitter, than do liberal stories.

So it, it's kind of like, it makes nobody happy. I don't think we should be amplifying, especially extremist views or views that are really dangerous. And I think that what you wanna do is get rid of the lowest-hanging fruit. Like, you don't wanna convert new people to these ideas. There might be some people who are already so enmeshed in some of these communities that it's gonna be hard for them to find their way out. But let's try to minimize the number of people who are exposed to it.

JASON KELLEY
That's interesting. It sounds like there are some models of fact checking that can help, but it really more applies to the type of information that's being, uh, fact checked than, than the specific way that the platform kind of sets it up. Is that what I'm hearing? Is that right?

ALICE MARWICK
Yeah, I mean, the problem is with a lot of, a lot of people online, I bet if you ask 99 people if they consider themselves to be critical thinkers, 95 would say, yes, I'm a critical thinker. I'm a free thinker.

JASON KELLEY
A low estimate, I'm pretty sure.

ALICE MARWICK
A low estimate. So let's say you ask a hundred people and 99 say they're critical thinkers. Um, you know, I, I interview a lot of people who have sort of what we might call unusual beliefs, and they all claim that they do fact checking and that, when they hear something, they want to see if it's true.

And so they go and read other perspectives on it. And obviously, you know, they're gonna tell the researcher what they think I wanna hear. They're not gonna be like, oh, I saw this thing on Facebook and then I, like, spread it to 2000 people, and then it, you know, it turned out it was false. Um, but especially in communities like QAnon or anti-vax, they already think of themselves as, like, researchers.

A lot of people who are into conspiracy theories think of themselves as researchers. That's one of their identities. And they spend quite a bit of time going down rabbit holes on the internet, looking things up and reading about it. And it's almost like a funhouse mirror held up to academic research because it is about the pleasure of learning, I think, and the joy of sort of educating yourself and these sort of like autodidactic processes where people can kind of learn just for the fun of learning. Um, but then they're doing it in a way that's somewhat divorced from what I would call sort of empirical standards of data collection or, you know, data assessment.

CINDY COHN
So, let's flip it around for a second. What does it look like if we are doing this right? What are the things that we would see in our society and in our conversations that would indicate that we're, we're kind of on the right path, or that we're, we're addressing this?

ALICE MARWICK
Well, I mean, the problem is this is a big problem. So it requires a lot of solutions. A lot of different things need to be worked on. You know, the number one thing I think would be toning down, you know, violent political rhetoric in general. 

Now how you do that, I'm not sure. I think it comes from, you know, there's this kind of window of discourse that's open that I think needs to be shut, where maybe we need to get back to slightly more civil levels of discourse. That's a really hard problem to solve. In terms of the internet, I think right now there's been a lot of focus on the biggest social media sites, and I think that what's happening is you have a lot of smaller social sites, and it's much more difficult to play whack-a-mole with a hundred different platforms than it is with three.

CINDY COHN
Given that we think that a pluralistic society is a good thing and we shouldn't all be having exactly the same beliefs all the time. How do we nurture that diversity without, you know, without the kind of violent edges? Or is it inevitable? Is there a way that we can nurture a pluralistic society that doesn't get to this us versus them, what team are you on kind of approach that I think underlies some of the spilling into violence that we've seen?

ALICE MARWICK
This is gonna sound naive, but I do think that there's a lot more commonalities between people than there are differences. So I interviewed a woman who's a conservative evangelical anti-vaxxer last week, and she and I don't have a lot in common in any way, but we had, like, a very nice conversation. And one of the things that she told me is that she has this one particular interest that's brought her into conversation with a lot of really liberal people.

And so because she's interacted with a lot of them, she knows that they're not, like, demonic or evil. She knows they're just people, and they have really different opinions on a lot of really serious issues, but they're still able to sort of chat about the things that they do care about.

And I think that if we can trace those lines of inclusion and connectivity between people, I think that's much, that's a much more positive, I think, area for growth than it is just constantly focusing on the differences. And that's easy for me to say as a white woman, right? Like it's much harder to deal with these differences if the difference in question is that the person thinks you're, you know, genetically inferior or that you shouldn't exist.

Those are things that are not easy. You can't just kumbaya your way out of those kinds of things. And in that case, I think we need to center the concerns of the most vulnerable and of the most marginalized, and make sure they're the ones whose voices are getting heard and their concerns are being amplified, which is not always the case, unfortunately.

JASON KELLEY
So let's say that we got to that point and um, you know, the internet space that you're on isn't as polarized, but it's pluralistic. Can you describe a little bit about what that feels like in your mind?

ALICE MARWICK
I think one thing to remember is that most people don't really care about politics. You know, a lot of us are kind of Twitter obsessed and we follow the news and we see our news alerts come up on our phone and we're like, Ooh, what just happened? Most people don't really care about that stuff. If you look at a site like Reddit, which gets a bad rap, but I think Reddit is just like a wonderful site for a lot of reasons.

It's mostly focused around interest-based communities, and the vast, vast majority of them are not about politics. They're about all kinds of other things, you know, very mundane stuff. Like you have a dog or a cat, or you like the White Lotus and you wanna talk about the finale. Or you, you know, you live in a community and you want to talk about the fact that they're building a new McDonald's on like Route Six or whatever.

Yes, in those spaces you'll see people get into spats and you'll see people get into arguments and in those cases, there's usually some community moderation, but generally I think a lot of those communities are really healthy and positive. The moderators put forth like these are the norms.

And I think it's funny, I think some people would be surprised to hear Reddit called uplifting, but I think you see the same thing in some Facebook groups as well, um, where you have people who really love, like, quilting. Or, I'm in dozens and dozens of Facebook groups on all kinds of weird things.

Like, “I found this weird thing at a thrift store,” or “I found this painting, you know, what can you tell me about it?” And I get such a kick out of seeing people from all these walks of life come together and talk about these various interests. And I do think that, you know, that's the utopian ideal of the internet that I think got us all so into it in the eighties and nineties.

This idea that you can come together with people and talk about things that you care about, even if you don't have anyone in your local immediate community who cares about those same things, and we've seen over and over that, that can be really empowering for people. You know, if you're an LGBTQ person in an area where there aren't that many other LGBTQ people, or if you're a black woman and you're the only black woman at your company, you know, you can get resources and support for that.

If you have an illness that isn't very well understood, you know, you can do community education on that. So, you know, these pockets of the internet, they exist and they're pretty big. And when we just constantly focus on this small minority of people who are on Twitter, you know, yelling at each other about stuff, I think it really overlooks the fact that so much of the internet is already this place of, like, enjoyment and, you know, hope.

CINDY COHN
Oh, that is so right, and so good to be reminded of, um, that it's not that we have to fix the internet, it's that we have to grow the part of the internet that never got broken. Right? The part that is already fixed.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.


CINDY COHN
Now back to our conversation with Alice Marwick. In addition to all of her fascinating research on disinformation that we’ve been talking about so far, Alice has also been doing some work on another subject very near and dear to our hearts here at EFF – privacy.

Alice has a new book coming out in May 2023 called The Private is Political – so of course we couldn’t let her go without talking about that. 

ALICE MARWICK
I wanted to look at how you can't individually control privacy anymore, because all of our privacy is networked because of social media and big data. We share information about each other; information about us is collected by all kinds of entities.

You know, you can configure your privacy settings till the cows come home, but it's not gonna change whether your photo gets swept up in, you know, some AI that then uses it for other kinds of purposes. And the second thing is to think about privacy as a political issue that has big impacts on everyone's lives, especially people who are marginalized in other areas.

I interviewed, oh, people from all kinds of places and spaces with all sorts of identities, and there's this really big misconception that people don't care about privacy. But people care very deeply about privacy, and the way that they show that care manifests in, like, so many different kinds of creative ways.

And so I'm hoping, I'm looking forward to sharing the stories of the people I spoke with.

CINDY COHN
That's great. Can you tell us one or I, I don't wanna spoil it, but -

ALICE MARWICK
Yeah, no. So I spoke with Jazz in North Carolina. These are all pseudonyms. And Jazz is an atheist, gender queer person, and they come from a pretty conservative Southern Baptist family and they're also homeless. They have a child who lives with their sister and they get a little bit of help from their family, like, not a lot, but enough that it can make the difference between whether they get by or not.

So what they did is they created two completely different sets of internet accounts. They have two Facebooks, two Twitters, two email addresses. Everything is different and it's completely firewalled. So on one, they use their preferred name and their pronouns. On the other, they use the pronouns they were assigned at birth and the name that their parents gave them. And so the contrast between the two was just extreme. And so Jazz said that they feel like their real, their Facebook page that really reflects them, that's their “me” page. That's where they can be who they really are, because they have to kind of cover up who they are in so many other areas of their lives.

So they get this sort of big kick out of having this space on the internet where they can be like fiery and they can talk about politics and gender and things that they care about, but they have a lot to lose if the, if that, you know, seeps into their other life. So they have to be really cognizant of things like who does Facebook recommend that you friend, you know, who might see my other email address, who might do a Google search for my name?

And so I call this privacy work. It's the work that all of us do to maintain our privacy, and we all do it. Um, but it's just much more intense for some kinds of people. And so I see in Jazz, you know, a lot of these themes: somebody who is suffering from intersectional forms of marginalization, but is still kind of doing the best they can.

And, you know, moving forward in the world, somebody who's being very creative with the internet, they're using it in ways that none of the designers or technologists ever intended, and they're helping it work for them, but they're also not served well by these technologies because they don't have the options to set the technologies up in ways that would fit their life or their needs.

Um, and so what I'm really calling for here is to, rather than thinking about privacy as individual, as something we each have to solve, as seeing it as a political and a structural problem that cannot be solved by individual responsibility or individual actions.

CINDY COHN
I so support that. That is certainly what we've experienced in the world as well, you know, the fight against the Real Names policy, say at Facebook, which really impacted the LGBTQ and trans community, especially because people are changing their names, right? And that's important.

This real names policy, you know, first of all, it's based on not-good science, this idea that if you attach people's names to what they say, they will behave better. Which is, you know, belied by all of Facebook. Um, and, you know, it doesn't have any science behind it at all. But also there are these negative effects for people, you know, for safety. You know, we work with a lot of domestic violence victims, and being able to separate out one identity from another is tremendously important. And again, it can matter for people's very lives. Or it could just be like, you know, when I'm Cindy at the dog park, I'm not interested in being, you know, Cindy who's the ED of EFF. And being able to segment out your life and show up as different people, like, there's a lot of power in that, even if it's not, you know, um, necessary to save your life.

ALICE MARWICK
Yeah, absolutely. Sort of that, that ability to maintain our social roles and to play different aspects of ourselves at different times. That's like a very human thing, and that's sort of fundamental to privacy. It's what parts of yourself do you wanna reveal at any given time. And when you have these huge sites like Facebook where they want a real name and they want you to have a persistent identity, it makes that really difficult.

Whereas sites like Reddit where you can have a pseudonym and you can have 12 accounts and nobody cares, and the site is totally designed to deal with that. You know, that works a lot better with how most people, I think, want to use the internet.

CINDY COHN
What other things do you think we can do? I mean, I'm assuming that we need some legal support here as well as technical, um, uh, support for, uh, a more private internet. Really, a more privacy-protective internet.

ALICE MARWICK
I mean, we need comprehensive data privacy laws.

CINDY COHN
Yeah.

ALICE MARWICK
The fact that every different type of personal information is governed differently, and some aren't governed at all. The fact that your email is not private, that, you know, anything you do through a third party is not private, whereas your video store records are private.

That makes no sense whatsoever. You know, it's just this complete amalgam. It doesn't have any underlying principle whatsoever. The other thing I would say is data brokers. We gotta get 'em out. We gotta get rid of them. You shouldn't be able to collect data for one purpose and then use it for God knows how many other purposes.

I think, you know, I was very happy under the Obama administration to see that the FTC was starting to look into data brokers. It seems like we lost a lot of that energy during the Trump administration, but you know, to me they're public enemy number one. Really don't like 'em.

CINDY COHN
We are with you.  And you know this isn’t new – as early as 1973 the federal government developed  something called the Fair Information Practice Principles that included recognizing that it wasn’t fair to collect data for one purpose and then use it for another without meaningful consent – but that’s the central proposition that underlies the data broker business model. I appreciate that your work confirms that those ideas are still good ones.  

ALICE MARWICK
Yeah, I think there's sort of a group of people doing critical privacy and critical surveillance studies, um, a more diverse group of people than we've typically seen studying privacy. For a long time it was just sort of the domain of, you know, legal scholars and computer scientists. And so now that it's being sort of opened up to qualitative analysis and sociology and other forms, you know, I think we're starting to see a much more comprehensive understanding, which hopefully at some point will, you know, affect policy making and technology design as well.

CINDY COHN
Yeah, I sure hope so. I mean, I think we're in a time when our US Supreme Court is really not grappling with privacy harms and is effectively making it harder and harder to at least use the judicial remedies to try to address privacy harm. So, you know, this development in the rest of society and in people's thinking will eventually, I think, leak over into the judicial side.

But it's one of the things that a fixed internet would give us is the ability to have actual accountability for privacy harms at a level that much better than what we have now. And the other thing I hear you really developing out is that maybe the individual model, which is kind of inherent in a lot of litigation, isn't really the right model for thinking about how to remedy all of this either.

ALICE MARWICK
Well, a lot of it is just theatrical, right? It reminds me of, you know, security theater at the airport. Like the idea that you're consenting by clicking through a 75-page, you know, terms of service change that's written at, you know, a level that would require a couple of years of law school. If you actually sat and read all of those, it would take up, like, two weeks of your life every year.

Like that is just preposterous. Like, nobody would sit and be like, okay, well here's a problem. What's the best way to solve it? It's just a loophole that allows companies to get away with all kinds of things that I think are, you know, unethical and immoral by saying, oh, well we told you about it.

But I think often what I hear from people is, well, if you don't like it, don't use it. And that's easy to say if you're talking about something that is, you know, an optional extra to your life. But when we're talking about the internet, there aren't other options. And I think what people forget is that the internet has replaced a lot of technologies that kind of withered away. You know, I've driven across country three times, and the first two times was kind of pre-mobile internet or a pre, you know, ubiquitous internet. And you had a giant road atlas in your car. Every gas station had maps and there were payphones everywhere. You know, now most payphones are gone, you go to a gas station, you ask for directions, they're gonna look at you blankly, and no one has a road atlas. You know, there are all these infrastructures that existed pre-internet that allowed us to exist without smartphones in the internet. And now most of those are gone. What are you supposed to do if you're in college and you're not using, you know, at the very least, your course management system, which is probably already, you know, collecting information on you and possibly selling it to a third party.

You can't pass your class. If you're not joining your study group, which might be on Facebook or any other medium, or WhatsApp or whatnot. Like, you can't communicate with people. It's absolutely ridiculous that we're just saying, oh, well, if you don't like it, don't use it. Like you don't tell people, you know.

If you're being targeted by, like, a murderous sociopath, oh, just don't go outside, right? Just stay inside all the time. That's just terrible advice, and it's not realistic.

CINDY COHN
No, I think that is true and certainly trying to find a job. I mean,  there are benefits to the fact that all of this stuff is networked, but it really does shine a light on the fact that, that this terms of service approach to things as if this is a contract, like a freely negotiated contract like I learned in law school with two equal parties, having a negotiation and coming to a meeting of the minds like this is, it's a whole other planet from that approach.

And to try to bring that frame to, you know, whether you enforce those terms or not, is, it's jarring to people. It's not how people live. And so it feels this way in which the legal system is kind of divorced from, from our lives. And, and if we get it right, the legal terms and the things that we are agreeing to will be things that we actually agree to, not things that are stuffed into a document that we never read or we really realistically can't read.

ALICE MARWICK
Yeah, I would love it if the terms of service was an actual contract and I could sit there and be like, all right, Facebook, if you want my business, this is what you have to do for me. And make some poor entry level employees sit there and go through all my ridiculous demands. Like, sure, you want it to be a contract, then I'm gonna be an equal participant.

CINDY COHN
You want those green M&Ms in the green room?

ALICE MARWICK
Yeah, I want, I want different content moderation standards. I want a pony, I want glittery gifs on every page. You know, give it all to me.

CINDY COHN
Yeah. I mean, you know, there's a, there's a way in which a piece of the fediverse strategy that I think, uh, we're kind of at the beginning of, uh, perhaps, uh, in this moment, is, um, is that a little bit: you have a smaller community, you have people who run the servers, um, who you can actually interact with.

I mean, I don't know that, again, I don't know that there's ponies, but, um, but you know, one of the things that will help get us there is smaller, right? We can't do content moderation at scale. Um, and we can't do, you know, contractual negotiations at scale. So smaller might be helpful and I don't think it's gonna solve all the problems.

I'm, you know, but I think that there, there's a way in which you can at least get your arms around the problem, if you're dealing with a smaller community that then can interoperate with other communities, but isn't beholden to them with one rule to rule them all.

ALICE MARWICK
Yeah, I mean, I think the biggest problem right now is we need to get around usability and UX, and these platforms need to be just as easy to use as, like, the easiest social platform. You know, it needs to be something that if you don't have a college education, if you're not super techy, if you're only familiar with very popular social media platforms, you're still able to use things like Mastodon.

I don't think we're quite there yet, but I can see a future in which we get there.

CINDY COHN
Well thank you so much for continuing to do this work.

ALICE MARWICK
Oh, thank you. Thank you, Cindy. Thank you, Jason. It was great to chat today.

JASON KELLEY
I'm so glad we got to talk to Alice. That was a really fun conversation and one that I think really underscored a point that I've noticed, um, which is that over the last, I don't know, many years we've seen Congress and other legislators try to tackle these two separate issues that we talked with Alice about.

One being sort of like content on the internet, and the other being privacy on the internet. And when we spoke with her about privacy, it was clear that there are a lot of obvious and simple and direct solutions that inform how we can make privacy on the internet something that actually exists, compared to content, which is a much stickier issue.

And, and it's interesting that Congress and other legislators have consistently focused on one of these two topics, or let's say both of them, at the expense of the one that actually is fairly direct when it comes to solutions. That really sticks out for me. But I'm wondering, I've blathered on: what do you find most interesting about what we talked with her about? There was a lot there.

CINDY COHN
Well, I think that Alice does a great service to all of us by pointing out all the ways in which the kind of easy solutions that we reach for, especially around misinformation and disinformation, and the easy stories we tell ourselves, are not easy at all and not empirically supported. So I think one of the things she does is just shine a light on the difference between the kind of stories we tell ourselves about how we could fix some of these problems and the actual empirical evidence about whether those things will work or not.

The other thing that I appreciated is she kind of pointed to spaces on the internet where things are kind of fixed. She talked about Reddit, she talked about some of the fan fiction places, she talked about Facebook groups, pointing out that, you know, sometimes we can be overly focused on politics and the darker pieces of the internet, and that these places that are supportive and loving and good communities that are doing the right thing, they already exist.

We don't have to create them, we just have to find a way to foster them, um, and build more of them. Make more of the internet that experience. But it's refreshing to realize that, you know, massive pieces of the internet were never broken, um, and don't need to be fixed.

JASON KELLEY
That is 100% right. We're sort of tilted, I think, to focus on the worst things, which is part of our job at EFF. But it's nice when someone says, you know, there are actually good things. And it reminds us that in a lot of ways it's working, and we can make it better by focusing on what's working.

Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member, donate, or look at hoodies, t-shirts, hats and other merch, just in case you feel the need to represent your favorite podcast and your favorite digital rights organization.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time, in two weeks.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.
MUSIC CREDIT ANNOUNCER
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:
Probably Shouldn’t by J.Lang featuring Mr_Yesterday

CommonGround by airtone featuring simonlittlefield

Additional beds and alternate theme remixes by Gaëtan Harris

Vote for EFF’s ‘How to Fix the Internet’ Podcast in the Signal Awards!

We’re thrilled to announce that EFF’s “How to Fix the Internet” podcast is a finalist in the Signal Awards’ 3rd Annual Listener's Choice competition. Now we need your vote to put us over the top!

Vote now!

We’re barraged by dystopian stories about technology’s impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future.

That’s where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, “How to Fix the Internet” explores creative solutions to some of today’s biggest tech challenges.  

Over our five seasons, we’ve had well-known, mainstream names like Marc Maron to discuss patent trolls, Adam Savage to discuss the right to tinker and repair, Dave Eggers to discuss when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, to discuss how Congress can foster an internet that benefits everyone. But we’ve also had lesser-known names who do vital, thought-provoking work – Taiwan’s then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment.  

 That’s just a taste. If you haven’t checked us out before, listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. 

 And if you’ve liked what you’ve heard, please throw us a vote in the Signal Awards competition! 

Vote Now!

Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible. 

Electronic Frontier Foundation to Present Annual EFF Awards to Carolina Botero, Connecting Humanity, and 404 Media

2024 Awards Will Be Presented in a Live Ceremony Thursday, Sept. 12 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Carolina Botero, Connecting Humanity, and 404 Media will receive the 2024 EFF Awards for their vital work in ensuring that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

The EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 12, 2024 at the Golden Gate Club, 135 Fisher Loop in San Francisco’s Presidio. Guests can register at https://www.eff.org/event/eff-awards-2024. The ceremony will be livestreamed and recorded. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Maintaining internet access in a conflict zone, conducting fearless investigative reporting on how tech impacts our lives, and bringing the fight for digital rights and social justice to significant portions of Latin America are all ways of ensuring technology advances us all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners embody the internet’s highest ideals, building a better-connected and better-informed world that brings freedom, justice, and innovation for everyone. We hope that by recognizing them in this small way, we can shine a spotlight that helps them continue and even expand their important work.” 

Carolina Botero: Fostering Digital Human Rights in Latin America 

Carolina Botero is a researcher, lecturer, writer, and consultant who is among the foremost leaders in the fight for digital rights in Latin America. In more than a decade as executive director of the Colombia-based Karisma Foundation — founded in 2003 to ensure that digital technologies protect and advance fundamental human rights and promote social justice — she transformed the organization into an outspoken voice fostering freedom of expression, privacy, access to knowledge, justice, and self-determination in our digital world, with regional and international impact. She left that position this year, opening the door for a new generation while leaving a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Botero holds a master’s degree in international law and cooperation from Belgium’s Vrije Universiteit Brussel and a master’s degree in commercial and contracting law from Spain’s Universitat Autònoma de Barcelona. She frequently authors op-eds for Colombia’s El Espectador and La Silla Vacía, and serves on the advisory board of The Regional Center for Studies for the Development of the Information Society (Cetic.br), monitoring the adoption of information and communication technologies in Brazil. She previously served on the board of Creative Commons and as a member of the UNESCO Advisory Committee on Open Science.  

Connecting Humanity: Championing Internet Access in Gaza 

Connecting Humanity is a Cairo-based nonprofit organization that helps Palestinians in Gaza regain access to the internet – a crucial avenue for free speech and the free press. Founded in late 2023 by Egyptian journalist, writer, podcaster, and activist Mirna El Helbawi, Connecting Humanity collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. Connecting Humanity has collected hundreds of thousands of eSIMs from around the world and distributed them to people in Gaza, providing a lifeline for many caught up in Israel’s war on Hamas. People in crisis zones rely upon the free flow of information to survive, and restoring internet access in places where other communications infrastructure has been destroyed helps with dissemination of life-saving information and distribution of humanitarian aid, ensures that everyone’s stories can be heard, and enables continued educational and cultural contact. El Helbawi previously worked as an editor at 7 Ayam Magazine and as a radio host at Egypt’s NRJ Group; she was shortlisted for the Arab Journalism Award in 2016, and she created the podcast Helbing.

404 Media: Fearless Journalism 

As the media landscape in general and tech media in particular keep shrinking, 404 Media — launched in August 2023 — has tirelessly forged ahead with incisive investigative reports, deep-dive features, blogs, and scoops about topics such as hacking, cybersecurity, cybercrime, sex, artificial intelligence, consumer rights, government and law enforcement surveillance, privacy, and the democratization of the internet. Co-founders Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg all worked together at Vice Media’s Motherboard, but after that site’s parent company filed for bankruptcy in May 2023, the four journalists resolved to go out on their own and build what Maiberg has called “very much a website by humans, for humans about technology. It’s not about the business of technology — it’s about how it impacts real people in the real world.” Among many examples, 404 Media has uncovered a privacy issue in the New York subway system that let stalkers track people’s movements, causing the MTA to shut down the feature; investigated a platform being used to generate non-consensual pornography with AI, causing the platform to make changes limiting abuse; and reported on dangerously inaccurate AI-generated books that Amazon then removed from sale.

 To register for this event: https://www.eff.org/event/eff-awards-2024 

For past honorees: https://www.eff.org/awards/past-winners 

 

Journalists Sue Massachusetts TV Corporation Over Bogus YouTube Takedown Demands

Posting Video Clips of Government Meetings Is Fair Use That Doesn’t Violate the DMCA, EFF’s Clients Argue

BOSTON—A citizen journalists’ group represented by the Electronic Frontier Foundation (EFF) filed a federal lawsuit today against a Massachusetts community-access television company for falsely convincing YouTube to take down video clips of city government meetings.

The lawsuit was filed in the U.S. District Court for Massachusetts by Channel 781, an association of citizen journalists founded in 2021 to report on Waltham, MA, municipal affairs via its YouTube channel. The Waltham Community Access Corp.’s misrepresentation of copyright claims under the Digital Millennium Copyright Act (DMCA) led YouTube to temporarily deactivate Channel 781, making its work disappear from the internet last September just five days before an important municipal election, the suit says. 

“WCAC knew it had no right to stop people from using video recordings of public meetings, but asked YouTube to shut us down anyway,” Channel 781 cofounder Josh Kastorf said. “Democracy relies on an informed public, and there must be consequences for anyone who abuses the DMCA to silence journalists and cut off people’s access to government.” 

Channel 781 is a nonprofit, volunteer-run effort, and all of its content is available for free. Its posts include videos of its members reporting on news affecting the city, editorial statements, discussions in a talk-show format, and interviews. It also posts short video excerpts of meetings of the Waltham city council and other local government bodies. 

Waltham Community Access Corp. (WCAC) operates two cable television channels:  WCAC-TV is a Community Access station that provides programming geared towards the interests of local residents, businesses, and organizations, and MAC-TV is a Government Access station that provides coverage of municipal meetings, events, and special government-related programming. 

Some city meeting video clips that Channel 781 posted to YouTube were short excerpts from videos recorded by WCAC and first posted to WCAC’s website. Channel 781 posted them on YouTube to highlight newsworthy statements by city officials, to provoke discussion and debate, and to make the information more accessible to the public, including to people with disabilities. 

The DMCA notice and takedown process lets copyright holders ask websites to take down user-uploaded material that infringes their copyrights. Although Kastorf had explained to WCAC’s executive director that Channel 781’s use of the government meeting clips was a fair use under copyright law, WCAC sent three copyright infringement notices to YouTube referencing 15 specific Channel 781 videos, leading YouTube to deactivate the account and render all of its content inaccessible. YouTube didn’t restore access to the videos until two months later, after a lengthy intervention by EFF. 

The lawsuit, which seeks damages and injunctive relief, says WCAC knew, should have known, or failed to consider that the government meeting clips were a fair use of copyrighted material, and so it acted in bad faith when it sent the infringement notices to YouTube. 

“Nobody can use copyright to limit access to videos of public meetings, and those who make bogus claims in order to stifle critical reporting must be held accountable,” said EFF Intellectual Property Litigation Director Mitch Stoltz. “Phony copyright claims must never subvert the public’s right to know, and to report on, what government is doing.” 

For the complaint: https://www.eff.org/document/07-24-2024-channel-781-news-v-waltham-community-access-corporation-complaint

For more on the DMCA: https://www.eff.org/issues/dmca  

For EFF’s Takedown Hall of Shame: https://www.eff.org/takedowns

Contact: Mitch Stoltz, IP Litigation Director

Podcast Episode: Fighting Enshittification

The early internet had a lot of “technological self-determination” — you could opt out of things, protect your privacy, control your experience. The problem was that it took a fair amount of technical skill to exercise that self-determination. But what if it didn’t? What if the benefits of online privacy, security, interoperability, and free speech were more evenly distributed among all internet users?


(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that award-winning author and EFF Special Advisor Cory Doctorow wants us to fight for. His term “enshittification” — a downward spiral in which online platforms trap users and business customers alike, treating them more and more like commodities while providing less and less value — was selected by the American Dialect Society as its 2023 Word of the Year. But, he tells EFF’s Cindy Cohn and Jason Kelley, enshittification analysis also identifies the forces that used to make companies treat us better, helping us find ways to break the cycle and climb toward a better future. 

In this episode you’ll learn about: 

  • Why “intellectual property” is a misnomer, and how the law has been abused to eliminate protections for society 
  • How the tech sector’s consolidation into a single lobbying voice helped bulldoze the measures that used to check companies’ worst impulses 
  • Why recent antitrust actions provide a glimmer of hope that megacompanies can still be forced to do better for users 
  • Why tech workers’ labor rights are important to the fight for a better internet 
  • How legislative and legal losses can still be opportunities for future change 

Cory Doctorow is an award-winning science fiction author, activist, journalist and blogger, and a Special Advisor to EFF. He is the editor of Pluralistic and the author of novels including “The Bezzle” (2024), “The Lost Cause” (2023), “Attack Surface” (2020), and “Walkaway” (2017); young adult novels including “Homeland” (2013) and “Little Brother” (2008); and nonfiction books including “The Internet Con: How to Seize the Means of Computation” (2023) and “How to Destroy Surveillance Capitalism” (2021). He is EFF's former European director and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

CORY DOCTOROW
So interop, you know, it's the idea that you don't need to buy your washing machine from the same people who sold you your clothes. You can use anyone's washing soap in that washing machine. Your dishes go in, in any dishwasher. Anyone's gas or electricity go into your car, you can bring your luggage onto any airline.
You know, there's just this kind of general presumption that things work together and sometimes that's just a kind of happy accident or a convergence where, you know, the airlines basically all said, okay, if it's bigger than seventy-two centimeters, we're probably gonna charge you an extra fee. And the luggage makers all made their luggage smaller than seventy-two centimeters, or you know, what a carry-on constitutes or whatever. Sometimes it's very formal, right? Sometimes like you go to a standards body and you're like, this is the threading gauge and size of a standard light bulb. And that means that every light bulb that you buy is gonna fit into every light bulb socket.
And you don't have to like read the fine print on the light bulb to find out if you've bought a compatible light bulb. And sometimes it's adversarial. Sometimes the manufacturer doesn't want you to do it, right? Like, so HP wants you to spend something like $10,000 a gallon on printer ink, and most of us don't want to spend $10,000 a gallon on printer ink, and so out there are some people who figured out how HP printers ask a cartridge, ‘Hey, are you a cartridge that came from HP?’
And they figured out how to get cartridges that aren't made by HP to say, ‘Why yes, I am.’ And you know, it's not like the person buying the cartridge is confused about this. They are specifically, like, typing into a search engine, ‘How do I avoid paying HP $10,000 a gallon?’
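
For the technically curious: the cartridge check Cory describes is, at heart, a challenge-response handshake. Here is a minimal sketch of the idea in Python – the key scheme and all names are hypothetical, not HP's actual protocol – showing why a third-party cartridge that has recovered the key is indistinguishable from an official one:

```python
# Hypothetical challenge-response handshake between a printer and a
# cartridge -- an illustration of the idea, not HP's actual protocol.
import hashlib
import hmac
import os

VENDOR_KEY = b"key-burned-into-official-cartridges"  # hypothetical secret

def cartridge_answer(challenge: bytes, key: bytes) -> bytes:
    """A cartridge 'proves' it came from the vendor by keying an HMAC
    over the printer's random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def printer_accepts(answer_fn) -> bool:
    """The printer asks, in effect, 'Hey, are you a cartridge that came
    from the vendor?' and checks the response against its own key."""
    challenge = os.urandom(16)
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(answer_fn(challenge), expected)

# An official cartridge passes -- and so does any third-party cartridge
# that knows the key, which is the adversarial-interoperability move
# described above: it simply answers "Why yes, I am."
official = lambda c: cartridge_answer(c, VENDOR_KEY)
print(printer_accepts(official))  # True
```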

CINDY COHN
That's Cory Doctorow. He's talking about all the places in our lives where, whether we call it that or not, we get to enjoy the power of interoperability.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
We spend a lot of time here at EFF warning about the things that could go wrong online – and then of course jumping into the fray when they do go wrong. But on this show we're trying to envision what the world looks like if we start to get things right.

JASON KELLEY
Our guest today is Cory Doctorow. He is one of the world’s leading public thinkers about the digital world, as well as an author and activist. He writes both fiction and nonfiction that has more ideas per page than anyone else we know.

CINDY COHN
We’re lucky enough that he’s been one of our colleagues at EFF for over 20 years and he’s one of my dearest friends. We had Cory on the podcast during our first season. I think he was our very first guest - but we thought it was time to check in again. And that’s not only because he’s so much fun to talk to, but also because the central idea he has championed for addressing the problems of platform monopolies – an idea called interoperability which we also call competitive compatibility – it’s started to get real traction in policy spaces both in the US and in Europe.
I quote Cory a lot on this show, like the idea that we don't want to go back to the good old days. We're trying to create the good new days. So I thought that it was a good place to start. What do the good new days look like in the Coryverse?

CORY DOCTOROW
So the old good internet was characterized by a very high degree of what I call like technological self-determination. Just the right to just decide how the digital tools you use work.
The problem was that it also required a high degree of technical skill. There are exceptions right. I think ad blockers are kind of our canonical exception for, you know, describing what a low-skill, high-impact element of technological self-determination is. Like more than half of all web users now run ad blockers. Doc Searls calls it the largest consumer boycott in human history.
And you don't have to be a brain surgeon or a hacker to install an ad blocker. It's just like a couple of clicks and away you go. And I think that a new good internet is one in which the benefits of technological self-determination, all the things you get beyond an ad blocker, like, you know, I'm speaking to you from a household that's running a Pi-hole, which is like a specialized data appliance that actually blocks ads in other things like smart TVs and apps and whatever.
I have a personal VPN that I run off my home network so that when I'm roaming - I just got back from Germany and they were blocking the port that I used for my mail server, and I could VPN into my house and get my email as though I were connected via my home - all of those things should just accrue to you with the ease that you get from an ad blocker because we can harness markets and tinkerers and cooperatives and people who aren't just making a thing to scratch their own itch, but are actually really invested in other people who aren't technically sophisticated being able to avail themselves of these tools too. That's the new good internet.
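
For context, a Pi-hole works by answering DNS lookups for known ad and tracker domains with a dead-end address, so the ads never load on any device in the house, smart TVs included. A conceptual sketch of that DNS-sinkhole idea – with made-up domains, and greatly simplified from how Pi-hole actually works – might look like this:

```python
# Conceptual sketch of DNS-sinkhole ad blocking, the idea behind
# appliances like Pi-hole. Domains here are made up; real blockers
# parse DNS packets and use curated blocklists with millions of entries.
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical

def resolve(hostname: str) -> str:
    """Answer blocklisted names with a sinkhole address; resolve the rest."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        return "0.0.0.0"  # the ad request goes nowhere
    return socket.gethostbyname(hostname)  # normal upstream lookup

if __name__ == "__main__":
    for host in ("ads.example.com", "example.com"):
        print(f"{host} -> {resolve(host)}")
```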

CINDY COHN
I love that. I mean, you know, what is it? The future is here. It's just not evenly distributed. You just want to evenly distribute the future, and also make it simpler for folks to use.

CORY DOCTOROW
Yeah. You know, the problem of the old good internet was not the part where skilled technical practitioners didn't have to put up with nonsense from companies that didn't have their best interests at heart. Right?
The problem was that not everybody got that. Well, the good future of the internet is one in which we more evenly distribute those benefits. The bad future of the internet we're living in now is the one in which it's harder and harder, even for skilled practitioners, to enjoy those benefits.

CINDY COHN
And harder for the rest of us to get them, right? I hear two things, both as an end user, my world's gonna have a lot more choices, but good choices about things I can do to protect myself and places I can look for that help. And then as somebody who's a hacker or an innovator, you're gonna have a lot easier way to take your good idea, turn it into something and make it actually work, and then let people find it.

CORY DOCTOROW
And I think it's even more than that, right? Because I think that there's also the kind of incentives effect. You know, I'm not the world's biggest fan of markets as the best way to allocate all of our resources and solve all of our problems. But one thing that people who really believe in markets like to remind us of is that incentives matter.
And there is a kind of equilibrium in the product planning meeting where someone is always saying, ‘If we make it this bad, will someone type into a search engine, “How do I unrig this game?”’ Because once they do that, then all bets are off, right? Think about, again, back to ad blockers, right? If someone in the boardroom says, ‘Hey, I've calculated that if we make these ads 20% more invasive, we’ll increase our revenue per user by 2%.’
Someone else, who doesn't necessarily care about users, might say, yeah, but we think 20% of users will type ‘How do I block ads’ into a search engine as a result of this. And the expected revenue from that user doesn't just stay static at what we've got now instead of rising by 2%. The expected revenue from that user falls to zero forever.
We'll never make an advertising dime off of that user once they type ‘How do I block ads’ into a search engine. And so it isn't necessary even that the tools defend you. The fact that the tools might defend you changes the equilibrium, changes the incentives, changes the conduct of firms. And where it fails to do that, it then affords you a remedy.
So it's both belt and suspenders. Plan A and plan B.
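
To make the boardroom math concrete, here is the expected-value calculation implied by the exchange above – the numbers are the hypothetical ones from the conversation, not data from any real company:

```python
# The boardroom calculus from the example above: +2% revenue per user
# from more invasive ads, versus 20% of users installing an ad blocker
# and generating zero ad revenue forever after.
baseline = 1.00          # today's revenue per user (normalized)
uplift = 1.02            # per-user revenue if nobody pushes back
blocker_rate = 0.20      # share of users who respond by blocking ads

naive = baseline * uplift
expected = (1 - blocker_rate) * uplift + blocker_rate * 0.0

print(f"naive forecast:    {naive:.3f}")     # 1.020
print(f"expected per user: {expected:.3f}")  # 0.816 -- ~18% below today
```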

JASON KELLEY
It sounds like we're veering happily towards some of the things that you've talked about lately with the term that you coined last year about the current moment in our digital world: Enshittification. I listened to your McLuhan lecture and it brought up a lot of similar points to what you're talking about now. Can you talk about this term? In brief, what does it mean, and, you know, why did the American Dialect Society call it the word of the year?

CORY DOCTOROW
Right. So I mean, the top level version of this is just that tech has these unique, distinctive technical characteristics that allow businesses to harm their stakeholders in ways that are distinct from the ways that other companies can just because like digital has got this flexibility and this fluidity.
And so it sets up this pattern that as the regulation of tech, and as the competition for tech, and as the force that workers provided as a check on tech's worst impulses have all failed, we've got this dynamic where everything we use is a platform, and every platform is decaying in the same way, where they're shifting value first to users, to trap users inside a walled garden, and then bringing in business customers with the promise of funneling value from those users to those business customers, trapping those business customers, and then once everybody is held hostage, using that flexibility of digital tools to take that value away without releasing the users.
So even though the service is getting worse and worse for you, and it's less and less valuable to you, you still find yourself unable to leave. And you are even being actively harmed by it as the company makes it worse and worse.
And eventually it reaches a breaking point. Eventually things are so bad that we leave. But the problem is that that's like a catastrophic ending. That's the ending that, you know, everybody who loved LiveJournal had. Where they loved LiveJournal and the community really mattered to them.
And eventually they all left, but they didn't all end up in the same place. The community was shattered.
They just ended up fragmented and you can still hear people for whom LiveJournal was really important, saying like, I never got that back. I lost something that mattered to me. And so for me, the Enshittification analysis isn't just about like how do we stop companies from being bad, but it's about how we allow people who are trapped by bad companies to escape without having to give up as much as they have to give up now.

CINDY COHN
Right, and that leads right into adversarial interoperability, which is a term that I think was coined by Seth Schoen, EFF’s original staff technologist. It's an idea that you have really thought about a lot, Cory, and developed out. We heard you talk at the beginning of the episode, with that example about HP printers.

CORY DOCTOROW
That adversarial interoperability, it's been in our technology story for as long as we've had digital tools, because digital tools have this flexibility we've alluded to already. You know, the only kind of digital computer we can make is the Turing complete von Neumann machine.
It runs every program that's valid and that means that, you know, whenever a manufacturer has added an anti-feature or done something else abusive to their customers, someone else has been able to unlock it.
You know, when IBM was selling mainframes on the cheap and then charging a lot of money for printers and, you know, keyboards and whatever, there were these things called plug-compatible peripherals.
So, you know, these companies they call the Seven Dwarfs, Fujitsu and all these other tech companies that we now think of as giants, they were just cloning IBM peripherals. When Apple wanted to find a way for its users to have a really good experience using Microsoft Office, which Microsoft had very steadfastly refused them and had, uh, made just this unbelievably terrible piece of software called, uh, Office for the Mac that just didn't work and had all these compatibility problems, Steve Jobs just had his technologists reverse engineer Office, and they made iWork: Pages, Numbers and Keynote.
And it can read and write all the files from Excel, PowerPoint and Word. So this has always been in our story and it has always acted as a hedge on the worst impulses of tech companies.
And where it failed to act as a hedge, it created an escape valve for people who were trapped in those bad impulses. And as tech has become more concentrated – which itself is the result of a policy choice not to enforce antitrust law, which allowed companies to gobble each other up and become very, very concentrated – it became easier for them to speak with one voice in legislative outlets.
You know, when Seth coined the term adversarial interoperability, it was about this conspiracy among the giant entertainment companies to make it illegal to build a computer that they hadn't approved of, called the Broadcast Flag.
And the reason the entertainment companies were able to foist this conspiracy on the tech industry, which was, even then, between one and two orders of magnitude larger than the entertainment companies, is that the entertainment companies were like seven firms and they spoke with one voice, and tech was a rabble.
It was hundreds of companies. We were in those meetings for the Broadcast Protection Discussion Group, where you saw hundreds of companies at each other's throats, not able to speak with one voice. Today, tech speaks with one voice, and they have taken those self-help measures, that adversarial interoperability, that once checked their worst impulses, and they have removed them from us.
And so we get what Jay Freeman calls felony contempt of business model where, you know, the act of reverse engineering a printer cartridge or an office suite or a mobile operating system gives rise to both civil and criminal penalties, and that means no one invests in it. People who do it take enormous personal risks. There isn't the kind of support chain.
You definitely don't get that kind of thing where it's like, ‘just click this button to install this thing that makes your experience better.’ To the extent that it even exists, it's like, download this mysterious software from the internet. Maybe compile it yourself, then figure out how to get it onto your device.
No one's selling you a dongle in the checkout line at Walmart for 99 cents that just jailbreaks your phone. Instead, it's like becoming initiated into the Masons or something to figure out how to jailbreak your phone.

CINDY COHN
Yes, we managed to free jailbreaking directly through the exceptions process in the DMCA, but it hasn’t ended up really helping many people. We got an exception to one part of the law, but the very next section prevents most people from getting any real benefit.

CORY DOCTOROW
At the risk of, like, teaching granny to suck eggs, we know what the deficiency in the exceptions process is, right? I literally just explained this to a fact checker at the Financial Times who's running my enshittification speech, who's like, you said that it's illegal to jailbreak phones, and yet I've just found this process where they made it legal to jailbreak phones. And it's like, yeah, the process makes it legal for you to jailbreak your phone. It doesn't make it legal for anyone to give you a tool to jailbreak your phone, or for you to ask anyone how that tool should work, or compare notes with someone about it. So you can, like, gnaw your own jailbreaking tool out of a whole log in secret, right? Discover the defect in iOS yourself.
Figure out how to exploit it yourself. Write an alternative version of iOS yourself. And install it on your phone in the privacy of your own home. And provided you never tell anyone what you've done or how you did it, the law will permit you to do this and not send you to prison.
But give anyone any idea how you're doing it, especially in a commercial context where it's, you know, in the checkout aisle at the Walmart for 99 cents – off to prison with you. A five-hundred-thousand-dollar fine and a five-year prison sentence for a first offense for violating Section 1201 of the DMCA in a commercial way. Right? So, yeah, we have these exceptions, but they're mostly ornamental.

CINDY COHN
Well, I mean, I think that that's the, you know, it's profoundly weird, right? This idea that you can do something yourself, but if you help somebody else do it, that's illegal. It's a very strange thing. Of course, EFF has not liked the Digital Millennium Copyright Act since 1998 when it was passed, or probably 1995 when they started talking about it. But it is a situation in which, you know, we've chipped away at the law, and this is a thing that you've written a lot about. These fights are long fights and we have to figure out how to be in them for the long run and how to claim victory when we get even a small victory. So, you know, maybe this is a situation in which our crowing about some small victories has led people to be misled about the overarching story, which is still one where we've got a lot of work to do.

CORY DOCTOROW
Yeah, and I think that, you know, the way to understand this is as not just the DMCA, but also all the other things that we just colloquially call IP Law that constitute this thing that Jay calls felony contempt of business model. You know, there's this old debate among our tribe that, you know, IP is the wrong term to use. It's not really property. It doesn't crisply articulate a set of policies. Are we talking about trademark and patent and copyright, or do we wanna throw in broadcast rights and database rights and you know, whatever, but I actually think that in a business context, IP means something very, very specific.
When an investor asks a founder, ‘What IP do you have?’ what they mean is: what laws can you invoke that will allow you to exert control over the conduct of your competitors, your critics, and your customers? That's what they mean. And oftentimes, each IP law will have an escape valve, like the DMCA's triennial exemptions. But you can layer one in front of the other, in front of the other, in order to create something where all of the exemptions are plugged. So, you know, copyright has these exceptions, but then you add trademark, where, like, Apple is doing things like engraving nearly invisible Apple logos on the components inside of its phones, so that when they're refurbished in the Far East and shipped back as parts for independent repair, they ask the customs agency in the US to seize the parts for tarnishment of their trademark, because the parts are now of an unknown quality and they bear their logo, which means that it will degrade the public's opinion of the reliability of an Apple product. So, you know, copyright and patent don't stop them from doing this, but we still have this other layer of IP. And if you line the layers up in the right way – and this is what smart corporate lawyers do, they know the right pattern to line these different protections up – all of the exceptions that were supposed to provide a public interest, that were supposed to protect us as the users or protect society, each one of those is choked off by another layer.

CINDY COHN
I think that’s one of my biggest frustrations in fixing the internet. We get stuck fighting one fight at a time and just when we pass one roadblock we have to navigate another. In fact, one that we haven’t mentioned yet is contract law, with terms of service and clickwrap license agreements that block innovation and interoperability. It starts to feel more like a game, you know, can our intrepid coder navigate around all the legal hurdles and finally get to the win where they can give us back control over our devices and tools?

CORY DOCTOROW
I mean, this is one of the things that's exciting about the antitrust action that we're getting now, is that I think we're gonna see a lot of companies being bound by obligations whose legitimacy they don't acknowledge and which they are going to flout. And when they do, presuming that the enforcers remain committed to enforcing the law, we are going to have opportunities to say to them, ‘Hey, you're gonna need to enter into a settlement that is gonna restrict your future conduct. You're gonna have to spin off certain things. You're gonna have to allow certain kinds of interop or whatever.’
So we've got these spaces opening up. And this is how I think about all of this, and it is very game-like, right? We have these turns. We're taking turns, our adversaries are taking turns. And what we want is not just to win ground, but we want to win high ground. We want to win ground from which we have multiple courses of action that are harder to head off.

And one of the useful things about the enshittification analysis is it tries to identify the forces that made companies treat us good. I think sometimes the companies treated us well because the people who ran them were honorable. But also you have to ask how those honorable people resisted their shareholders’ demands to shift value from the firm to their users or the other direction. What forces let them win, you know, in that fight. And if we can identify what forces made companies treat technology users better on the old good internet, then we can try and build up those forces for a new good internet.

So, you know, one of the things that I think really helped the old good internet was the paradox of the worker power of the tech worker, because tech workers have always been well compensated. They've always had a lot of power to walk out of the job and go across the street and get another job with someone better. Tech workers had all of this power, which meant that they didn't ever really, like, form unions. Like, tech union density historically has been really low. They haven't had formal power, they've had individual power, and that meant that they typically enjoyed high wages and quite cushy working conditions a lot of the time, right? Like the tech campus with the gourmet chef and the playground and the gym and the sports thing and the bicycles and whatever.

But at the same time, this allowed the people they worked for to appeal to a sense of mission among these people. And these were these, like, non-commercial, ethical, normative demands on the workforce. And the appeals to those let bosses convince workers to work crazy hours. Right? You know, the extremely hardcore Elon Musk demand that you sleep under your desk, right? This is where it comes from, this sense of mission.

Which meant, for the bosses, that there was this other paradox, which was that if you motivate your workers with a sense of mission, they will feel a sense of mission. And when you say, ‘Hey, this product that you fought really hard for, you have to make worse, right? You've, you know, missed your gallbladder surgery and your mom's funeral and your kid's little league championship to make this product. We want you to stick a bunch of gross ads in it,’ the people who did that job were like, no, I feel a sense of mission. I will quit and walk across the street and get another job somewhere better if this is what you demand of me.

One of the constraints that's fallen away is this labor constraint. You know, when Google does a stock buyback and then lays off 12,000 workers within a few months, and the stock buyback would pay their wages for 27 years, the workers who remain behind get the message that the answer to ‘No, I refuse to make this product worse’ is ‘Fine, turn in your badge and don't let the door hit you in the ass on the way out.’

And one of the things we've always had a trade in at EFF is tech workers who really cared about their users. Right? That's been the core of our membership. Those have been the whistleblowers we sometimes hear from. Those have been our clients sometimes.
And we often say when companies have their users’ backs, then we have the company's back. If we were to decompose that more fully, I think we would often find that the company that has its users' back really has a critical mass of indispensable employees who have their users’ back – that within the balance of power in the company, it's tipping towards users. And so, you know, in this moment of unprecedented union formation, if not union density, this is an area where, you know, you and I, Cindy, have written about this – where tech rights can be workers' rights, where bossware can cut against labor rights, and interoperable tools that defeat bossware can improve workers’ agency within their workplace, which is good for them, but it's also good for the people that they feel responsibility for: the users of the internet.

CINDY COHN
Yeah. I remember in the early days when I first joined EFF and Adobe had had the FBI arrest Dmitry Sklyarov at DEF CON because he developed a piece of software that allowed people to copy and move their Adobe eBooks into other formats and platforms. Some of EFF’s leadership went to Adobe’s offices to talk to their leadership and see if we could get them to back off.
I remember being told about the scene because there were a bunch of hackers protesting outside the Adobe building, and they could see Adobe workers watching them from the windows of that building. We knew in that moment that we were winning, that Adobe was gonna back down because their internal conversations were, how come we're the guys who are sending the FBI after a hacker?
We had something similar happen with Apple more recently, when Apple announced that it was going to do client-side scanning. We knew from the tech workers that we were in contact with inside the company that breaking end-to-end encryption was something that most of the workers didn't approve of. We actually flew a plane over Apple’s headquarters at One Infinite Loop to draw attention to the issue. Now, whether it was the plane or not, it wasn't long before Apple backed down, because they felt the pressure from inside as well as outside.

I think the tech workers are feeling disempowered right now, and it's important to keep telling these stories and reminding them that they do have power, because the first thing that a boss who wants to control you does is make you think you're all alone and you don't have any power.

I appreciate that in the world we’re envisioning, where we start to get tech right, we're not just talking about users and what users get. We're talking about what workers and creators and hackers and innovators get, which is much more control and the ability to say no, or to say yes to something better than the thing that the company has chosen. I'm interested in continuing to try to tell these stories and have these conversations.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Cory Doctorow. Cory is well known for his writing and speaking but what some people may not realize is that he is a capital A Activist. I work with him closely on the activism team here at EFF, and I have seen firsthand how sometimes his eyes go red and he will throw everything he has into a fight. So I wanted to get him to talk a bit more about the activism side of his work, and what fuels that.

CORY DOCTOROW
I tried to escape EFF at one point. I actually was like, God, you know, the writing and the activism, I can't do both. I'm just gonna do one. And so I went off to do the writing for a few years, and I got so pissed off with things going wrong in the world that I wasn't actively engaged in trying to fix that I just lost it. And I was like, whatever negative effects accrue due to overwork are far less significant to me, both, like, intellectually and kind of emotionally, than the negative effects I get from feeling hopeless, right, and helpless, and sitting on the sidelines while things that are just untenable go on. And, you know, Cindy said it before, it's a long game, right? The activism game. We are sowing the seeds of a crop that we may never reap. And I am willing to understand and believe and make my peace with the idea that some of the stuff that I'm doing will be victories after I've left the field – right, it'll be for people who haven't even graduated high school yet, let alone gone to work for EFF or one of our allies.
And so when I see red, when I get really angry – when, I don't know, the DRM in browsers at the W3C, or the European Union trying for mandatory copyright filters for online services – I think, like, this is a fight we may not win, but it's a fight that we must fight, right? The stakes are too high not to win it, and if we lose it this time around, we will lay the groundwork for a future victory. We will create the people who are angry that the policy came out this way, who will act when some opportunity opens up in the future. Because, you know, in these fights that we fight, the side that we're on is the side of producing something good and stable and beneficial. And the thing that we're fighting against has massive downstream harms, whether that's mandatory copyright filters or client-side scanning or breaking end-to-end encryption, right? Like, if we lose a breaking end-to-end encryption fight, what we have lost is the safety of millions of people in whatever country that rule has been enacted, and that means that, in a way that is absolutely deplorable and that the architects of these policies should be ashamed of, some of those people are gonna come to the most terrible harms in the future. And the thing that we should be doing, because we have lost the fight to stop those harms from occurring, is be ready, when those harms occur, to step in and say, not just we told you so, but here's how we fix it. Here's the thing that we're going to do to turn this crisis into the opportunity to precipitate a change.

JASON KELLEY
Yeah, that's right. Something that has always pleased me is when we have a guest here on the podcast – and we’ve had many – who has talked about the Blue Ribbon Campaign. And it’s clear that, you know, we won that fight, but years and years ago we put together this coalition of people, maybe unintentionally, that still are with us today. And it is nice to imagine that, with the wins and the losses, we gain bigger numbers as we lay that groundwork.

CINDY COHN
And I think there is something also fun about trying to build the better world, being the good guys. I think there is something powerful about that. The fights are long, they're hard. I always say that, you know, the good guys throw better parties. And so on the one hand it's, yes, it's the anger; your eyes see red, we have to stop this bad thing from happening. But the other thing is that the other people who are joining with you in the fight are really good people to hang out with. And so I guess I wanna make sure that we're talking about both sides of a kind of activist life, because they're both important. And the fun part – fun when you win, sometimes a little gallows humor when you don't – that's as important as the anger side, because if you're gonna be in it for the long run, you can't just run on, you know, red-eyed anger alone.

CORY DOCTOROW
You know, I have this great laptop from this company Framework – I promise you this goes somewhere – that, uh, is a user-serviceable laptop. So it comes with a screwdriver. Even someone who's really klutzy like me can fix their laptop. And, uh, I drop my laptops all the time – and the screws had started coming loose on the bottom, and they were like, ‘Hey, this sounds like a thing that we didn't anticipate when we designed it. Why don't we ship you a free one, and you ship us back the broken one, and we can analyze it for future ones.’ So I just did this. I swapped out the bottom cover of my laptop at the weekend, which meant that I had a new sticker surface for my laptop. And I found a save.org ‘some things are not for sale’ sticker, which was, you know, this incredible campaign that we ran with our lost and beloved colleague Elliot, and putting that sticker on felt so good. You know, it was just like, yeah, this is like a head on a pike for me. This is great.

CINDY COHN
And for those who may not have followed that: just at the beginning of Covid, actually, there was an effort by private equity to buy control of the .org domain, which of course means EFF.org, but it means every other nonprofit. And we marshaled a tremendous coalition of nonprofits and others to essentially, you know, make the deal not happen, and save .org for, you know, the .orgs. And as Cory mentioned, our dear friend Elliot, who was our activism director at the time – that was his last campaign before he got sick. And we did, we won. We saved .org. Now that fight continues. Uh, things are not all perfect in .org land, but we did head that one off, and that included a very funky, nerdy protest in front of an ICANN meeting that, uh, a bunch of people came to.

CORY DOCTOROW
Top-level domains: still a dumpster fire. In other news, water's still wet. You know, the thing about that campaign that was so great is it was one where we didn't have a path to victory. We didn't have a legal leg to stand on. The organization was just, like, operating in its own kind of bubble where it was – formally, at least on paper – insulated from public opinion, from stakeholder opinions. It just got to do whatever it wanted. And we just, like, kind of threw everything at it. We tried all kinds of different tactics, and cumulatively they worked. And there were weird things that came in at the end, like Xavier Becerra, who was then the Attorney General of California, going, like, well, you're kind of, you're a California nonprofit. Like, I think maybe we're gonna wanna look at this.
And then all of a sudden everyone was just like, no, no, no, no, no. But you know, it wasn't like Becerra saved it, right? It was like we built up the political pressure that caused the Attorney General of California who's got a thing or two on his plate, to kind of get up on his hind legs and go, ‘Hey, wait a second. What's going on here?’
And there've been so many fights like that over the years. You know, this is, this is the broadcast treaty at the UN. I remember when we went – our then-colleague Fred von Lohmann was like, ‘I know how to litigate in the United States, 'cause we have, like, constitutional rights in the United States. The UN is not going to let NGOs set the agenda or sue. You can't force them to give you time.’ You know, it's like you have all the cards stacked against you there, but we killed the broadcast flag, and we did it, like, by being digitally connected with activists all over the world, which allowed us to exploit the flexibility of digital tools to have a fluid, improvisational style that allowed us at each turn to improvise, in the moment, new tactics that went around the roadblocks that were put in our way. And some of them were surreal – like, our handouts were being stolen and hidden in the toilets. Uh, but you know, it was a very weird fight.
And we trounced the most powerful corporations in the world in a forum that was completely stacked against us. And you know, that's the activist joy here too, right? It's like you go into these fights with the odds stacked against you. You never know whether or not there is a lurking potential for a novel tactic that your adversary is completely undefended on, where you can score really big, hard-to-anticipate wins. And I think of this as being related to a theory of change that I often discuss when people ask me about optimism and pessimism.
Because I don't like optimism and pessimism. I think they're both a form of fatalism. Optimism and pessimism are both the idea that the outcome of events is unrelated to human conduct, right? Things will get worse or things will get better. You just sit on the sidelines. It's coming either way. The future is a streetcar on tracks, and it's going where it's going.
But I think that hope is this idea that if you're like, trying to get somewhere and you don't know how to get there, you're trying to ascend a gradient towards a better future - if you ascend that gradient to the extent that you can see the path from where you are now, that you can attain a vantage point from which you can see new possible ways of going that were obscured from where you were before, that doing something changes the landscape, changes where you're situated and may reveal something else you can do.

CINDY COHN
Oh, that's such a lovely place to end. Thank you so much, Cory, for taking time to talk with us. We're gonna keep walking that path, and we're gonna keep looking for the little edges and corners and ways, you know, that we can continue to push forward the better internet because we all deserve it.

JASON KELLEY
Thanks, Cory. It's really nice to talk to you.

CORY DOCTOROW
Oh, it was my pleasure.

JASON KELLEY
You know, I get a chance to talk to Cory more often than most people, and I'm still just overjoyed when it gets to happen. What did you think of that conversation, Cindy?

CINDY COHN
What I really liked about it is that he really grounds what could otherwise be a kind of wonky thing - adversarial interoperability or competitive compatibility - in a list of very concrete things that have happened in the past, and not the far past, the fairly recent past. And so, you know, building a better future really means just bringing some of the tools to bear that we've already brought to bear in other situations, just to our new kind of platform enshittification world. Um, and I think it makes it feel much more doable than something that might be, you know, pie in the sky - and then we all go to Mars and everything gets better.

JASON KELLEY
Yeah. You know, he's really good at saying, here's how we can learn from what we actually got right in the past. And that's something people don't often do in this field. It's often trying to learn from what we got wrong. And the part of the conversation that I loved was just hearing him talk about how he got back into doing the work. You know, he said he wanted to do writing or activism, because he was just doing too much, but in reality he couldn't do just one of the two, because he cares so much about what's going on. It reminded me, when he was saying, sort of, what gets his eyes to turn red, of when we were speaking with Gay Gordon-Byrne about right to repair, and how she had been retired and, after getting pulled back in again and again, just decided to go wholly committed to fighting for the right to repair - you know, that quote from The Godfather about being continually pulled back in. This is Cory and, and people like him, I think, to a tee.

CINDY COHN
Yeah, I think so too. That reminded me of what, what she said. And of course I was on the other side of it. I was one of the people that Cory was pinging over and over again.

JASON KELLEY
So you pulled him back in.

CINDY COHN
Well, I think he pulled himself back in. I was just standing there. Um, but it is fun to watch somebody feel their passion grow so much that they just have to get back into the fight. And I think Gay really described that same trajectory of how, you know, sometimes something just bugs you enough that you decide, look, I gotta figure out how to get into this fight and make things better.

JASON KELLEY
And hopefully people listening will have that same feeling. And I know that, you know, many of our supporters do already.
Thanks for joining us for this episode of How to Fix the Internet. If you have any feedback or suggestions, we would be happy to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, maybe you could become an EFF member and maybe you could pick up some merch. We've got very good t-shirts. Or you can just peruse to see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Xena’s Kiss / Madea’s Kiss by M. Wick, Probably Shouldn’t by J. Lang featuring Mr. Yesterday, Come Inside by Zepp Herm featuring Snowflake, and Drops of H2O the Filtered Water Treatment by J. Lang featuring Airtone.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology.

I hope you'll join us again. I'm Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Government Has Extremely Heavy Burden to Justify TikTok Ban, EFF Tells Appeals Court

New Law Subject to Strictest Scrutiny Because It Imposes Prior Restraint, Directly Restricts Free Speech, and Singles Out One Platform for Prohibition, Brief Argues

SAN FRANCISCO — The federal ban on TikTok must be put under the finest judicial microscope to determine its constitutionality, the Electronic Frontier Foundation (EFF) and others argued in a friend-of-the-court brief filed Wednesday to the U.S. Court of Appeals for the D.C. Circuit. 

The amicus brief says the Court must review the Protecting Americans from Foreign Adversary Controlled Applications Act — passed by Congress and signed by President Biden in April — with the most demanding legal scrutiny because it imposes a prior restraint that would make it impossible for users to speak, access information, and associate through TikTok. It also directly restricts protected speech and association, and deliberately singles out a particular medium for a blanket prohibition. This demanding First Amendment test must be used even when the government asserts national security concerns. 

The Court should see this law for what it is: “a sweeping ban on free expression that triggers the most exacting scrutiny under the First Amendment,” the brief argues, adding it will be extremely difficult for the government to justify this total ban. 

Joining EFF in this amicus brief are the Freedom of the Press Foundation, TechFreedom, Media Law Resource Center, Center for Democracy and Technology, First Amendment Coalition, and Freedom to Read Foundation. 

TikTok hosts a wide universe of expressive content from musical performances and comedy to politics and current events, the brief notes, and with more than 150 million users in the United States and 1.6 billion users worldwide, the platform hosts enormous national and international communities that most U.S. users cannot readily reach elsewhere. It plays an especially important and outsized role for minority communities seeking to foster solidarity online and to highlight issues vital to them. 

“The First Amendment protects not only TikTok’s US users, but TikTok itself, which posts its own content and makes editorial decisions about what user content to carry and how to curate it for each individual user,” the brief argues.  

Congress’s content-based justifications for the ban make it clear that the government is targeting TikTok because it finds speech that Americans receive from it to be harmful, and simply invoking national security without clearly demonstrating a threat doesn’t overcome the ban’s unconstitutionality, the brief argues. 

“Millions of Americans use TikTok every day to share and receive ideas, information, opinions, and entertainment from other users around the world, and that lies squarely within the protections of the First Amendment,” EFF Civil Liberties Director David Greene said. “By barring all speech on the platform before it can happen, the law effects the kind of prior restraint that the Supreme Court has rejected for the past century as unconstitutional in all but the rarest cases.” 

For the brief: https://www.eff.org/document/06-26-2024-eff-et-al-amicus-brief-tiktok-v-garland

For EFF’s stance on TikTok bans: https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban 

Contact: David Greene, Civil Liberties Director

EFF Welcomes Tarah Wheeler to Its Board of Directors

Wheeler Brings Perspectives on Information Security and International Conflict to the Board of Directors

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce today that Tarah Wheeler — a social scientist studying international conflict, an author, and a poker player who is CEO of the cybersecurity compliance company Red Queen Dynamics — has joined EFF’s Board of Directors. 

Wheeler has served on EFF’s advisory board since June 2020. She is the Senior Fellow for Global Cyber Policy at the Council on Foreign Relations and was elected to Life Membership at CFR in 2023. She is an inaugural contributing cybersecurity expert for the Washington Post, and a Foreign Policy contributor on cyber warfare. She is the author of the best-selling “Women In Tech: Take Your Career to The Next Level With Practical Advice And Inspiring Stories” (2016). 

“I am very excited to have Tarah bring her judgment, her technical expertise and her enthusiasm to EFF’s Board,” EFF Executive Director Cindy Cohn said. “She has supported us in many ways before now, including creating and hosting the ‘Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON,’ which will have its third year this summer. Now we get to have her in a governance role as well.” 

"I am deeply honored to join the Board of Directors at the Electronic Frontier Foundation,” Wheeler said. “EFF's mission to defend civil liberties in the digital world is more critical than ever, and I am humbled to be invited to serve in this work. EFF has been there for me and other information security researchers when we needed a champion the most. Together, we will continue to fight for the rights and freedoms that ensure a free and open internet for all." 

Wheeler has been a US/UK Fulbright Scholar in Cyber Security and Fulbright Visiting Scholar at the Centre for the Resolution of Intractable Conflict at the University of Oxford, the Brookings Institution’s contributing cybersecurity editor, a Cyber Project Fellow at the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government, and an International Security Fellow at New America, leading a new international cybersecurity capacity-building project with the Hewlett Foundation’s Cyber Initiative. She has been Head of Offensive Security & Technical Data Privacy at Splunk, and Senior Director of Engineering and Principal Security Advocate at Symantec Website Security. She has led projects at Microsoft Game Studios (Halo and Lips) and architected systems at encrypted mobile communications firm Silent Circle. She has two cashes and $4,722 in lifetime earnings in the World Series of Poker. 

Members of the Board of Directors ensure EFF’s sustainability by adopting sound, ethical, and legal governance and financial management policies so that the organization has adequate resources to advance its mission.  

Shari Steele — who had been on EFF’s Board since 2015 when she ceased being EFF’s Executive Director — has rotated off the Board. Gigi Sohn has been elected Chair of the Board. 

For the full roster of EFF’s Board of Directors: https://www.eff.org/about/board

Podcast Episode: AI in Kitopia

Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.

(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all. 

In this episode you’ll learn about: 

  • The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms. 
  • How “moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech. 
  • Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit. 
  • Why “watermarking” probably isn’t a solution to AI-generated disinformation. 

Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria. 

Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KIT WALSH
Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values rather than hard coded to advance the values of the people who are providing it to you and trying to extract something from you and as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research, of development, of data - we haven't really touched on privacy yet, but data that is sourced in a consensual way. And essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive, and I don't have that in the world yet.

CINDY COHN
That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality and copyright and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there and today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

JACOB HOFFMAN-ANDREWS
I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people and not simply increasing the control that their governments and their employers have over their lives and their bodies.

JASON KELLEY
We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question, which is: how can AI actually make the world we live in a better place?

CINDY COHN
So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.

JASON KELLEY
AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.

KIT WALSH
The confusion is understandable because AI is being used as a marketing term quite a bit, rather than as an abstract concept, rather than as a scientific concept.
And the ways that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is first I think about what kind of technology are we really talking about because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.

CINDY COHN
You're not making those up, Kit. These are real examples.

KIT WALSH
That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients because when you're talking about not getting health care that can be life or death. And machine learning can either be a system where you – you, humans, code a reinforcement mechanism. So you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
It can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years, and that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised in the sense that humans, usually low paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
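To make Kit's distinction concrete, here is a minimal sketch in Python of the two training styles she describes. The tiny dataset, the labels, and the choice of scikit-learn are illustrative assumptions for this sketch, not anything from the episode:

    # A rough sketch of supervised vs. unsupervised training with scikit-learn.
    # The "images" and labels below are made-up stand-ins for illustration.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    images = rng.random((100, 16))  # 100 tiny "images" as feature vectors

    # Supervised: humans (often low-paid annotators) supply the labels, and
    # the model learns to reproduce those classifications on new inputs.
    labels = (images.sum(axis=1) > 8).astype(int)  # stand-in for "cow / not cow"
    classifier = LogisticRegression().fit(images, labels)
    print(classifier.predict(images[:3]))

    # Unsupervised: no labels at all; the system is asked to find whatever
    # patterns exist in the raw data on its own.
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(images)
    print(clusters[:10])

(Her reinforcement example - random changes to an algorithm, rewarded when it succeeds by your measure of success - is a third style, not shown here.)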
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government which has all of the power of the state behind it and which administers a whole lot of necessary public benefits - that is using decisions to decide who is worthy and who is not to obtain those benefits? Or, who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.

CINDY COHN
So Jacob, two questions I use to distill down AI decision-making are: who is the decision-making supposed to be serving, and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?

JACOB HOFFMAN-ANDREWS
Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person who's being decided whether they get healthcare or not and sometimes it can be the operator.
You know, it's, uh, popular to have kind of human in the loop, like, oh, we have this AI decision-making system that's maybe not fully baked. So there's a human who makes the final call. The AI just advises the human and, uh, there's a great paper by Madeleine Clare Elish describing this as a form of moral crumple zones. Uh, so, you may be familiar in a car, modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And, in some human in the loop decision making systems often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations and then the one time it did, well, that was your job to catch it.
And, you know, these are obviously, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system who ultimately need to bear the responsibility when their system harms people.

CINDY COHN
So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you: how would we fix the moral crumple zone problem, or what are the things we'd think about to do that?

JACOB HOFFMAN-ANDREWS
You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.

CINDY COHN
Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person who's kind of the last fair chance for something but we don't give them any responsibility to raise concerns when they see problems, much less the people impacted by the decisions.

KIT WALSH
And that’s also not an accident of the appeal of these AI systems. It's true that you can't hold a machine accountable really, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators, some private entities, to be able to point to the supposed wisdom and impartiality of an algorithm, which if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's actually, it's just sort of automated discrimination in many cases and that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight into how that algorithm was developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and when those decision-making processes are hidden inside an algorithm then the old tools aren't always effective at protecting your due process and protecting the public participation in how rules are made.

JASON KELLEY
It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?

KIT WALSH
Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested in and affected by the outcomes, particularly in the government context, though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. There are - sure, there are exceptions.

CINDY COHN
Right, but that’s illegal.

JASON KELLEY
Yeah, that's the idea. Right. You want to get away from that also.

KIT WALSH
Yeah, if we can live in Kitopia for a moment where, where these things are, are done more justly, within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.

JASON KELLEY
You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it sort of, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous, I guess, is the real question.

KIT WALSH
So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well it would send you to Wall Street. And that's not, that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.

CINDY COHN
Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.

KIT WALSH
No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.

CINDY COHN
And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias because it's seeing a small trend and it will increase the trend. And I don't remember the numbers, but it's pretty significant. So it's not just that the AI system will replicate what the police do. What they found in looking at these systems is that the AI systems increase the bias in the underlying data.
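The amplification Cindy describes is easy to see in a toy simulation of the feedback loop. Everything here, from the district numbers to the rates, is invented for illustration; it is a sketch of the dynamic, not the actual models in that research:

    # Toy feedback loop: two districts with identical true crime rates, but
    # district 0 starts with more recorded incidents. Patrols go where the
    # records are, and only patrolled crime produces new records.
    import random

    random.seed(1)
    true_rate = [0.3, 0.3]   # the underlying rates are exactly equal
    records = [60, 40]       # historical bias: more records in district 0

    for day in range(365):
        # "predictive" step: patrol the district with the most records so far
        target = 0 if records[0] >= records[1] else 1
        if random.random() < true_rate[target]:
            records[target] += 1  # only observed crime is ever recorded

    print(records)  # roughly [170, 40]: the initial gap widens, never shrinks

Because the patrol always goes where the records already are, the biased starting point is reinforced every single day, which is exactly the doubling-down effect described above.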
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used and already being used in ways that people may not see, but dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.

JACOB HOFFMAN-ANDREWS
Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.

CINDY COHN
There’s been a lot of focus on the dark side of generative AI and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out between good and bad uses of AI, right Kit?

KIT WALSH
Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect and copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is, that it has to be substantially similar to a thing that it's infringing and that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a bandaid because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its products.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI if you're working with an artist and you're trying to communicate your vision for something visual. Maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for and then say, this is what I want to pay you to draw. I want this kind of pose, right? But with more unicorns.

JASON KELLEY
And I think while we're talking about these sort of seemingly good, but ultimately dangerous solutions for the different sort of problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve a sort of problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually you just explain Jacob, cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?

JACOB HOFFMAN-ANDREWS
Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled it.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?

JASON KELLEY
I sadly have.

JACOB HOFFMAN-ANDREWS
Yeah. So you might recognize the images they have there; they want to make sure you pay for the image before you use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from, say, shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned-up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that it would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but, you know, a computer program designed to detect that watermark could. So you could imagine the AI might generate a picture and then, in the top left pixel, increase its shade by the smallest amount, and then the next one, decrease it by the smallest amount, and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually the EFF has published some interesting research in the past on a similar system in laser printers, where little yellow dots are embedded by most laser printers you can get, as an anti-counterfeiting measure.
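For the curious, here is a toy sketch in Python of that pixel-tweaking idea: hiding data in the least significant bit of each pixel's shade. The scheme and the message are invented for illustration; real provenance watermarks are more sophisticated, though similarly fragile, which comes up below:

    # Toy least-significant-bit watermark: nudge each pixel's shade by the
    # smallest possible amount so the pixels themselves carry hidden data.
    import numpy as np

    def embed(pixels, message):
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        out = pixels.flatten().copy()
        out[:bits.size] = (out[:bits.size] & 0xFE) | bits  # overwrite each LSB
        return out.reshape(pixels.shape)

    def extract(pixels, length):
        bits = pixels.flatten()[:length * 8] & 1
        return np.packbits(bits).tobytes()

    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    marked = embed(image, b"model-x|2024")  # visually identical to the original
    print(extract(marked, 12))              # b'model-x|2024'
    print(extract(marked & 0xFE, 12))       # clearing every LSB erases the mark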

JASON KELLEY
This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.

CINDY COHN
Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.

JACOB HOFFMAN-ANDREWS
Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it and of course if you're generating disinformation it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory probably know what the watermarking system in use is. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.

CINDY COHN
Yep. And you, you end up in a cat and mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it and so you end up where the things that you most want to identify are probably going to trick people. Is that, is that the way you're thinking about it?

JACOB HOFFMAN-ANDREWS
Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on, um, watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw-carved wooden dog, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image to image AI, where you take an image and you generate an image that has a lot of the same concepts. A guy, a dog, made of wood and so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.

CINDY COHN
So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.

JACOB HOFFMAN-ANDREWS
Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented that, you know, if it replaces somebody's job, and that person's job doesn't get easier, they don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives, to do meaningful work still. And also in general, human agency is expanded rather than restricted. The promise of a lot of technologies is that, you know, you can do more in the world, you can achieve the conditions you want in your life.

CINDY COHN
Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?

KIT WALSH
So in Kitopia, people are able to use AI if it's a useful part of their artistic expression, they're able to use AI if they need to communicate something visual when I'm hiring a concept artist, when I am getting a corrective surgery, and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.

CINDY COHN
Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.

JASON KELLEY
Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcast and click on listener feedback to let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there, of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; Lost Track by Airtone; Come Inside by Zep Hurme; Xena's Kiss/Medea's Kiss by MWIC; Homesick by Siobhan D; and Drops of H2O (The Filtered Water Treatment) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

 

Podcast Episode: AI on the Artist's Palette

Collaging, remixing, sampling—art always has been more than the sum of its parts, a synthesis of elements and ideas that produces something new and thought-provoking. Technology has enabled and advanced this enormously, letting us access and manipulate information and images in ways that would’ve been unimaginable just a few decades ago.

(You can also find this episode on the Internet Archive and on YouTube.)

For Nettrice Gaskins, this is an essential part of the African American experience: The ability to take whatever is at hand—from food to clothes to music to visual art—and combine it with life experience to adapt it into something new and original. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how she takes this approach in applying artificial intelligence to her own artwork, expanding the boundaries of Black artistic thought.  

In this episode you’ll learn about: 

  • Why making art with AI is about much more than just typing a prompt and hitting a button 
  • How hip-hop music and culture was an early example of technology changing the state of Black art 
  • Why the concept of fair use in intellectual property law is crucial to the artistic process 
  • How biases in machine learning training data can affect art 
  • Why new tools can never replace the mind of a live, experienced artist 

Dr. Nettrice R. Gaskins is a digital artist, academic, cultural critic, and advocate of STEAM (science, technology, engineering, arts, and math) fields whose work explores “techno-vernacular creativity” and Afrofuturism. She teaches, writes, “fabs,” and makes art using algorithms and machine learning. She has taught multimedia, visual art, and computer science to high school students, and now is assistant director of the Lesley STEAM Learning Lab at Lesley University. She was a 2021 Ford Global Fellow, serves as an advisory board member for the School of Literature, Media, and Communication at Georgia Tech, and is the author of “Techno-Vernacular Creativity and Innovation” (2021). She earned a BFA in Computer Graphics with honors from Pratt Institute in 1992; an MFA in Art and Technology from the School of the Art Institute of Chicago in 1994; and a doctorate in Digital Media from Georgia Tech in 2014.  

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

NETTRICE GASKINS
I just think we have a need to remix, to combine, and that's where a lot of our innovation comes from, our ability to take things that we have access to. And rather than see it as a deficit, I see it as an asset because it produces something beautiful a lot of the times. Something that is really done for functional reasons or for practical reasons, or utilitarian reasons is actually something very beautiful, or something that takes it beyond what it was initially intended to be.

CINDY COHN
That's Nettrice Gaskins. She’s a professor, a cultural critic and a digital artist who has been using algorithms and generative AI as a part of her artistic practice for years.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping in to the fray when they DO go wrong. But this show is about envisioning, and hopefully helping create, a better future.

JASON KELLEY
Our guest today is Nettrice Gaskins. She’s the assistant director of the Lesley STEAM learning lab at Lesley University and the author of Techno-Vernacular Creativity and Innovation. Her artwork has been featured by the Smithsonian, among many other institutions.

CINDY COHN
Nettrice has spoken about how her work creating art using generative AI prompts is directly related to remix culture and hip hop and collage. There’s a rich tradition of remixing to create new artworks that can be more than the sum of their parts, and – at least the way that Nettrice uses it – generative AI is another tool that can facilitate this kind of art. So we wanted to start the conversation there.

NETTRICE GASKINS
Even before hip hop, even the food we ate, um, poor people didn't have access to, you know, ham or certain things. So they used the intestines of a pig and then they created gumbo, because they had a little bit of this and a little bit of that and they found really creative and innovative ways to put it all together that is now seen as a thing to have, or have tried. So I think, you know, when you have around the world, not just in the United States, but even in places that are underserved or disenfranchised you have this, still, need to create, and to even innovate.

And I think a lot of the history of African Americans, for example, in the United States, they weren't permitted to have their own languages. But they found ways to embed it in language anyway. They found ways to embed it in the music.

So I think along the way, this idea of what we now know as remixing or sampling or collage has been there all along, and this is just one other way. I think that once you explain how generative AI works to people who are familiar with remixing and all of this history, it clicks in many ways.
Because it starts to make sense that instead of, you know, 20 different magazines I can cut images out of and make a collage with, now we're talking about thousands of different pieces of information and data that can inform how an image is created, and that it's a prediction, and that we can create all these different predictions. It sounds a lot like what happens when we were looking at a bunch of ingredients in the house and realizing we had to make something from nothing and we made gumbo.

And that gumbo can take many different forms. There's a gumbo in this particular area of the country, then there's gumbo in this particular community, and they all have the same idea, but the output, the taste, the ingredients are different. And I think that when you place generative AI in that space, you're talking about a continuum. And that's kind of how I treat it when I'm working with gen AI.

CINDY COHN
I think that's so smart. And the piece of that that's important, that's kind of inherent in the way you're talking about it, is the person doing the mixing, right? The chef, right, is the one who makes the choices, and who's the chef matters, right?

NETTRICE GASKINS
And also, you know, when they did collage, there's no attribution. So if you look at a Picasso work that's done collage, he didn't, you know, all the papers, newspapers that he took from, there's no list of what magazines those images came from, and you could have hundreds to 50 to four different references, and they created fair use kind of around stuff like that to protect, you know, works that are like, you know, collage or stuff from modern art.

And we're in a situation where those sources are now quadrupled, it's not even that, it's like, you know, how many times, as opposed to when we were just using paper, or photographs.

We can't look at it the same because the technology is not the same, however, some of the same ideas can apply. Anybody can do collage, but what makes collage stand out is the power of the image once it's all done. And in some cases people don't want to care about that, they just want to make collage. They don't care, they're a kid and they just want to make paper and put it together, make a greeting card and give it to mom.

Other people make some serious work, sometimes very detailed, using collage, and that's just paper; we're not even talking about digital collage, or the ways we use Adobe Photoshop to layer images and create digital collages, and now Photoshop's considered to be an AI generator as well. So I think that if we look at the whole continuum of modern art, we look at this need to curate abstractions from things from life.

And, you know, Picasso was looking at African art; there's a way in which it was abstracted that he pulled into cubism, him and many other artists of his time. And then other artists looked at Picasso and then they took it to whatever level they took it to. But I think we don't see the continuum. We often just go by the tool or go by the process and not realize that this is really an extension of what we've done before. Which is how I view gen AI. And the way that I use it is oftentimes not just hitting a button or even just cutting and pasting. It is a real thoughtful process about ideas and iteration and a different type of collage.

CINDY COHN
I do think that this bridges over into, you know, an area where EFF does a lot of work, right, which is really making sure we have a robust Fair Use doctrine that doesn't get stuck in one technology, but really can grow, because, you know, we definitely had a problem with hip hop where the kind of over-enforcement of copyright really, I think, put a damper on a lot of stuff that was going on early on.

I don't actually think it serves artists either, that we have to look elsewhere as a way to try to make sure that we're getting artists paid rather than trying to control each piece and make sure that there's a monetization scheme that's based upon the individual pieces. I don't know if you agree, but that's how I think about it.

NETTRICE GASKINS
Yeah, and, you know, just like we can't look at collage traditionally and then look at gen AI as exactly the same - there are some principles and concepts around it that I think are very similar, but, you know, there's just more data. This is much more involved than just cutting and pasting on canvas board or whatever we're doing now.

You know, I grew up with hip hop, hip hop is 50 this year, I'm 53, so I was three, so hip hop is my whole life. You know, from the very beginning to, to now. And I've also had some education or some training in sampling. So I had a friend who was producing demos for, and I would sit there all night and watch him splice up, you know, different sounds. And eventually I learned how to do it myself. So I know the nature of that. I even spliced up sampled musics further to create new compositions with that.

And so I'm very much aware of that process and how it connects even from the visual arts side, which is mostly what I am as a visual artist, of being able to splice up and do all that. And I was doing that in 1992.

CINDY COHN
Nice.

NETTRICE GASKINS
I was trying to do it in 1987, the first time I used an Amiga and DPaint; I was trying to make collages then, in addition to what I was doing in my visual arts classes outside of that. So I've always been interested in this idea. But if you look at the history of even the music, these were poor kids living in the Bronx. These were poor kids and they couldn't afford all the things the other kids who were well off had, so they would go to the trash bins and take equipment and re-engineer it and come up with stuff that now DJs around the world are using, that people around the world are doing - but they didn't have it, so they had to be innovative. They had to think outside the box. And they had to use - they weren't musicians. They didn't have access to instruments, but what they did have access to was records. And they had access to, you know, discarded electronics and they were able to figure out a way to stretch out a rhythm so that people could dance to it.

They had the ability to layer sounds so that there was no gap between one album and the next, so they could continue that continuous play so that the party kept going. They found ways to do that. They didn't go to a store and buy anything that made that happen. They made it happen by tinkering and doing all kinds of things with the equipment that they had access to, which is from the garbage.

CINDY COHN
Yeah, absolutely. I mean, Grandmaster Flash and the creation of the crossfader and a lot of actual, kind of, old school hardware development, right, came out of that desire and that recognition that you could take these old records and cut them up, right? Pull the, pull the breaks and, and play them over and over again. And I just think that it's pulling on something very universal. Definitely based upon the fact that a lot of these kids didn't have access to formal instruments and formal training, but also just finding a way to make that music, make that party still go despite that, there's just something beautiful about that.

And I guess I'm, I'm hoping, you know, AI is quite a different context at this point, and certainly it takes a lot of money to build these models. But I'm kind of interested in whether you think we're headed towards a future where these foundational models or the generative AI models are ubiquitous and we'll start to see the kids of the future picking them up and building new things out of them.

NETTRICE GASKINS
I think they could do it now. I think that with the right situation where they could set up a training model and figure out what data they wanted to go into the model and then use that model and build it over time. I just think that it's the time and the space, just like the time and the space that people had to create hip hop, right?

The time and the space to get in a circle and perform together or get into a room and have a function or party. I think that it was the time. And I think that, we just need that moment in this space to be able to produce something else that's more culturally relevant than just something that's corporate.
And I think my experiences as an artist, as someone who grew up around hip-hop all my life, some of the people that I know personally are pioneers in that space of hip-hop. But also, I don't even stay in hip-hop. You know, I was talking about sashiko, man, that's a Japanese hand-stitching technique that I'm applying, remixing to. And for me to do that with Japanese people, you know, and then their first concern was that I didn't know enough about the sashiko to be going there. And then when I showed them what I knew, they were shocked. Like, when I go into, I go deep in. And so they were very like, Oh, okay. No, she knows.

Sashiko is a perfect example. If you don't know about sashiko embroidery and hand stitching: these were poor people who wanted to make their fabrics and clothing last longer, so they figured out ways to create intricate stitching patterns that reinforced the fabric. And then they would do patches, like patchwork quilts; it was both a quilting and an embroidery technique for poor people, once again using what they had.

When we think about gumbo, here's another situation of people who didn't have access to fancy ingredients, but found a way. And the work that came out of these traditions was beautiful. Aesthetically it was beautiful, and it was utilitarian in terms of why they did it. But now we have this entire cultural art form that comes out of that.

And I think that's kind of what has happened along the way. Just like there are gatekeepers in the art world, so the Picassos get in, but not necessarily everyone else. I think about Romare Bearden, who did get into some of the museums. But most people know of Picasso, and they don't know about Romare Bearden, who decided to use collage to represent black life.

But I also feel like, when we talk about equity, we talk about who gets in, who has the keys. The same thing occurs in generative AI, or just AI in general. The New York Times had an article recently that listed all the AI pioneers, and no women were included, it was just men. And then there was a Medium article in response: here are 13, 15 women you could have had in your list. Once again we see it, where people are saying who holds the keys. These are the people that hold the keys. And in some cases, it's based on what academic institution you're at.

So again, who holds the keys? Even among the women who are listed, it's the MITs and the Stanfords. And somewhere out there, there's an AI innovator who isn't at any of those institutions but is doing some cool things within a certain niche. We don't hear those stories, and there's not even an opening to explore them; the person who wrote that list and just included those men didn't even think about women, didn't even think about the other possibilities of who might be innovating in this space.

And so, year in and year out, every time there's a new change in our landscape, we still have the same kinds of historical omissions that have been going on for many years.

JASON KELLEY
Could we lift up some of the work that you've been doing and talk about the specific process or processes you've used? How do you actually use this? 'Cause I think a lot of people that listen just know that you can go to a website, type in a prompt, and get an image. They don't know about training it, how you can do that yourself and how you've done it. So I'm wondering if you could talk a little bit about your specific process.

NETTRICE GASKINS
So, I think, you know, people were saying, especially maybe two years ago, that my color scheme was unusually advanced for just using Gen AI. Well, I took two semesters of mandatory color theory in college.

So I had color theory training long before this stuff popped up. I was a computer graphics major, but I still had to take those classes. And so, yeah, my sense of color theory and color science is going to be strong because I had to do that every day as a freshman. And so that will show up.

I've had to take drawing, I've had to take painting. And a lot of those concepts that I learned as an art student go into my prompts. So that's one part of it. I'm using colors. I know the complement. I know the split complements.

I know the interactions between two colors. That came from training, from education, from being in the classroom with a teacher or professor. But also, one of my favorite books is Cane, by an author named Jean Toomer. He only wrote one book, but it's a series of short stories. I love it. It's so visual. The way he writes is so visual. So I started reinterpreting certain aspects of some of my favorite stories from that book.

And then I started interpreting some of those words and things and concepts and ideas in a way that I think the AI can understand, the generator can understand.

So another example would be Maya Angelou's Phenomenal Woman. There's a part of the poem, one of the lines, that talks about oil wells. When I generated my interpretation of that part of the poem, the oil wells weren't there. So, in the same generator, I extended my frame and drew a box: in this area of my image, I want you to generate oil wells.

And then I posted it, and people had this reaction, right? And then I actually put up the poem and said, this is Midjourney. The reinterpretation isn't just at the level of the image, like saying, I want to create a Picasso.

I don't want my work to look like Picasso at all, or anybody. I want my work to look like the Cubist movement, mixed with the Fauvists, mixed with the collages, mixed with this, with … I want a new image to pop up. I want to see something brand new, and that requires a lot of prompting, a lot of image prompting sometimes, a lot of different techniques.

And it's a trial-and-error kind of thing until you find your way through. But that's a creative process. That's not hitting a button. That's not cutting and pasting, or saying, make this look like Picasso. That's something totally different.
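
What Nettrice describes here, drawing a box and asking the generator to fill just that region, is what most image tools call inpainting. Midjourney's editor isn't scriptable, but for readers who want to try the idea, here's a minimal sketch of region-based inpainting using the open-source diffusers library. The model choice, file names, and prompt are illustrative assumptions, not her actual workflow.

```python
# A minimal sketch of region-based inpainting ("draw a box, fill it in").
# Illustrates the general technique, not Nettrice's Midjourney process.
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image, ImageDraw
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # a public inpainting model
    torch_dtype=torch.float16,
).to("cuda")

# The already-generated image we want to extend (hypothetical path).
image = Image.open("phenomenal_woman_draft.png").convert("RGB").resize((512, 512))

# White pixels mark the "box" the model is allowed to repaint.
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).rectangle([300, 60, 500, 260], fill=255)

# "In this area of my image, I want you to generate oil wells."
result = pipe(
    prompt="oil wells on the horizon, painterly, warm light",
    image=image,
    mask_image=mask,
).images[0]
result.save("phenomenal_woman_with_oil_wells.png")
```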

JASON KELLEY
Let’s take a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Nettrice Gaskins.

The way Nettrice talks about her artistic process using generative AI makes me think of that old cliche about abstract art – you know, how people say 'my kid could paint that.' There's a misconception now with Gen AI that people assume you just pop in a few words and boom, you get a piece of art. Sometimes that’s true, but Nettrice's approach goes far beyond a simple prompt.

NETTRICE GASKINS
Well, I did a talk recently, it may have been for the Philadelphia Museum of Art. I did a lecture, and in the Q&A they said, could you just demo what you do? You have some time. And I remember after I demoed, they said, oh, that definitely isn't hitting a button. That is much more. Now I feel like I should go in there.

And a lot of times people come away feeling like, now I really want to get in there and see what I can do. Because it isn't just hitting a button. I was showing, in what, 30 seconds to a minute, basically how I generate images, which is very different from what they might think. And that was just within Midjourney. Another thing, personally: before I got in on the prompt side, it was image style transfer, it was deep style. It wasn't prompt-based. It was about applying a style to an image. Now you can apply many styles to one image, but then it was: apply a style to this photo. And I spent most of my time in generative AI doing that until 2021, with DALL-E and Midjourney.

So before that, there were no prompts, it was just images. But then a lot came from that. The Smithsonian show came from that earlier work. It was right on the edge of DALL-E and all that stuff coming. But I feel like my approach even then came from the fact that I didn't see images that reflected me, or the type of images I wanted to see.

So that really propelled me into generative AI from the image-style side, applying styles. For example, if you're a computer graphics major, or you do computer graphics development or CGI, you probably know something called subsurface scattering.
Subsurface scattering is an effect people apply to skin; it gives a kind of milky glow. It's very well known: you texture and model your character based on it. However, it dulls dark skin tones. And if you look at photography, all the years with film, we have all these examples of where things were calibrated a certain way, not quite for darker skin tones. Here we are again. But there's something called specular reflection, or shine, and when applied, it brings up and enhances darker skin tones. So I wondered, using neural image style transfer or deep style, whether I could apply that shine to my photographs and create portraits of darker skin tones with enhanced features.

Well, it succeeded. It worked. And I was just using 18th-century tapestries that had metallics in them, so they had gold, they had that shine in them, as the style applied.
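
The technique she's describing, deep style or neural style transfer, renders one image's content in another image's style. As a rough illustration rather than her actual pipeline, here's a minimal sketch using Magenta's publicly available arbitrary-stylization model on TensorFlow Hub; the file names are hypothetical.

```python
# A rough sketch of "apply a style image to a photo" (neural style transfer),
# using a public TF Hub model. Not the exact tools used for the Gilded Series.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32,
                                expand_animations=False)
    img = tf.image.resize(img, (max_dim, max_dim), preserve_aspect_ratio=True)
    return img[tf.newaxis, ...]  # add a batch dimension

content = load_image("portrait_photo.jpg")        # e.g., a selfie
style = load_image("18th_century_tapestry.jpg")   # a metallic tapestry scan

# Magenta's arbitrary image stylization model.
model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("gilded_portrait.png", stylized[0].numpy())
```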

CINDY COHN
Ah.

NETTRICE GASKINS
So I did a series of portraits called the Gilded Series. And around the time I was working on that and exploring that, Greg Tate, the cultural critic and writer, passed away in 2021, and I did a portrait. I applied my tapestry style to a selfie he had taken of himself, so it wasn't from a magazine or anything like that. And then I put it on social media, and immediately his family and friends reached out.
So now it's a 25-foot mural in Brooklyn.

CINDY COHN
Wow.

JASON KELLEY
It's beautiful. I was looking at it earlier. We'll link to it.

CINDY COHN
Yeah, I’ve seen it too.

NETTRICE GASKINS
And that was not prompt-based; that's just applying some ideas around specular reflection, and it says "from the Gilded Series" on the placard. But that is generative AI. And that is remixing. Some of it is Photoshop: three different outputs from the generator were put together and combined in Photoshop to make that image.

And when it's nighttime, because it has metallics in there, there's a little bit of a shine to the image. When people tag me, if they're driving by in a car, you see that glow. I mean, you see that shine. And that came from experimenting with an idea using generative AI.

CINDY COHN
So, when people are thinking about AI right now, we've really worked hard, and EFF has been part of this, but others as well, to put the threat of bias on the table as something we also have to talk about, because it's definitely been a historical problem with AI and machine learning systems, including not recognizing black skin.

And I'm wondering, as somebody who's playing with this a lot, how do you think about the role bias plays, and how to combat it? I think your stories already do some of this, but I'd love to hear how you think about combating bias. And I have a follow-up question too, but I want to start with that.

NETTRICE GASKINS
Yeah. In some of the presentations I've done, like a Power of Difference talk for Bloomberg, I was talking to the black community about generative AI. There was a paper I read a month or two ago: they did a study of the main popular AI generators, like Stable Diffusion, Midjourney, DALL-E, maybe another, and they ran an experiment to show bias, to show why this is important. One of the prompts was a portrait of a lawyer. They ran it in all of them, and it was all men...

CINDY COHN
I was going to say it didn't look like me either. I bet.

NETTRICE GASKINS
I think DALL-E was more diverse. Still all men, but there was a black guy, and there was a racially ambiguous guy. And, was it Midjourney? For Deep Dream Generator, it was just a black guy with a striped shirt.

But for a portrait of a felon: Midjourney had a somewhat diverse set, still all men, but more racially ambiguous men. DALL-E, though, produced three apes and a black man. And so my comment to the audience, or to listeners, is: we know that there's a history, in Jim Crow and before, of linking black men, black people, to apes. Somehow that's in the model. The only thing in the prompt was "portrait of a felon," and there are three apes and a black man. How do apes play into "felon?" The connection isn't "felon"; the connection is the black man, and then to the apes. That's sitting somewhere, and it easily popped up.

And there's been scary stuff that I've seen in Midjourney, for example: I'm trying to do a blues musician, and it gives me an ape with a guitar. So there's that, and it's still all men, right?

So then, because I have particular knowledge, I know of a lawyer, Constance Baker Motley. So I did a portrait of Constance Baker Motley. But you would have to know that. If I'm a student who doesn't know any lawyers, and I do "portrait of a lawyer" for an assignment, or a portrait of whatever, who knows what might pop up, and then how do I process that?

We see bias all the time. Because of who I am, and because I know history, I know why the black man and the apes popped up for "felon." But it still happened, and we still have this reality. And one of the things needed to offset some of that is artist or user intervention.
So we intervene by changing the image. Thumbs up, thumbs down. Or we can say, in the prediction, this is wrong, this is not the right information, and eventually that trains the model not to do it. Or we can create a Constance Baker Motley of our own to offset it, but we would have to have that knowledge first.

And a lot of people don't have that knowledge first. I can think of a lawyer off the top of my head, a black woman, different from what I got from the AI generators. That intervention right now is key. And then we've got to have more people who are looking at the data, who are looking at the data sources, and who are also training the model, and more ways for people from diverse groups to train the model, or help train it, so we get better results.

And that usually doesn't happen. These biased results happen easily. So that's kind of my answer to that.
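
For readers who want to reproduce the flavor of the study she mentions, here's a hedged sketch of a simple prompt-bias audit: run the same neutral prompts through a generator many times and save the outputs for demographic review. It uses an open Stable Diffusion model via diffusers as a stand-in for the commercial generators in the study; the prompts, sample count, and paths are our assumptions.

```python
# A minimal bias-audit sketch: repeat neutral prompts, save outputs for review.
# Uses an open model as a stand-in for the generators in the study she cites.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = ["a portrait of a lawyer", "a portrait of a felon"]
samples_per_prompt = 20  # enough repeats to see a pattern, not one lucky draw

for prompt in prompts:
    for i in range(samples_per_prompt):
        image = pipe(prompt).images[0]
        # Reviewers then tally gender and skin-tone representation by hand
        # (or with a separate classifier) across the saved images.
        image.save(f"{prompt.replace(' ', '_')}_{i:02d}.png")
```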

CINDY COHN
One of the stories that I've heard you tell is about working with these dancers in Trinidad and training up a model on the Caribbean dancers. And I'm wondering if one of the ways you think about addressing bias, I guess like your lawyer story, is sticking other things into the model, or into the training data, to give it a broader frame than it might otherwise have.

But I'm wondering if that's something you do a lot of, and I might ask you to tell that story about the dancers, because I thought it was cool.

NETTRICE GASKINS
That was a Mozilla Foundation-sponsored project for many different artists and technologists to interrogate AI, generative AI specifically, but AI in general. It was a team of three women, me and two other women. One's a dancer, one's an architect, and those two women are from the Caribbean.

Because during the lockdown there was no festival, there was no carnival, a lot of people across those cultures were doing it on Zoom. So we had Zoom parties, and we collected data at those Zoom parties. We explained generative AI, what we were doing and how it worked, to the Caribbean community.

CINDY COHN
Nice.

NETTRICE GASKINS
And then we would put the music on and dance, so we were getting footage from the people who were participating. Then we used PoseNet and machine learning to produce an app that lets you dance with yourself, a mini dancer, or dance with shapes, or paint with movement, using colors from Carnival.

And one of the members, Vernelle Noel, was using GANs, generative adversarial networks, to produce the kind of costuming you might see, but in really futuristic ways. So there were different ways we could do it, and we explored that with the project.
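
The app she describes was built with PoseNet, which runs in the browser. As a rough Python stand-in for the same idea, this sketch pulls body keypoints out of dance footage with MediaPipe Pose and redraws them as simple dots, the raw material for a "dance with yourself" visual; the file path and drawing choices are illustrative assumptions.

```python
# A rough stand-in for the PoseNet idea: track a dancer's joints in video
# and redraw them as shapes. MediaPipe Pose substitutes for browser PoseNet.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture("zoom_party_footage.mp4")  # hypothetical footage

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        h, w, _ = frame.shape
        for lm in results.pose_landmarks.landmark:
            # Draw each joint as a dot; a real app would map these points
            # to shapes or Carnival-colored brush strokes instead.
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 6, (0, 180, 255), -1)
    cv2.imshow("mini dancer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```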

CINDY COHN
Again, I'm kind of feeding your own words back to you, because I found it really interesting: you talk about using these tools in a liberatory way, for liberation, as opposed to for surveillance and control. And I wondered if you have some thoughts about how best to do that. What kinds of things do you look for in a project to see whether it's really based in liberation, or based in surveillance and monitoring and control? Because that's been a long-time issue, especially for people from majority countries.

NETTRICE GASKINS
You know, we were very careful with the data from the Carnival project. We said that after a set period of time, we would get rid of the data. We were only using it for this project for a certain period, and everyone signed off on that, including the participants.
Kind of like an IRB if you're an academic. And in fact one of us, Vernelle, was an academic, so it was done through her university, and there was an IRB involved, even though I think it was just an art project. But we wanted to be careful with the data. We wanted people to know: we're going to collect this, and then we're going to get rid of it once we do what we need to do.

And I think that's part of it. But also, people have been doing stuff with surveillance technology for a good minute. Artists have been making statements using surveillance technology. People have been making music; there's a lot of rap music, songs about surveillance, about being watched. And in Second Life, I did a wall of eyes that follow you everywhere you go...

CINDY COHN
Oof.

NETTRICE GASKINS
...to create the feeling of always being watched. And for people who don't know what that's like, it created that feeling in them as avatars. They were like, why am I being watched? And I'm like, this is you, if you're black, at a grocery store; if you go to Neiman Marcus, a fancy department store. This might be what you feel like. Simulating that in virtual 3D was the goal.

Though it's not so much that I'm trying to simulate; I'm trying to offer another experience. There are people who really get behind the idea that you're taking from other people's work, and that that is the danger. And some people are doing that; I don't want to say that's not the case. There are people out there who don't have a visual vocabulary but want to get in here, and they'll use another person's artwork or name to play around with the tools. They don't have an arts background. And so they are going to do that.

And then there are people like me who want to push the boundaries, who want to see what happens when you mix different tools and do different things. To those people who say you're taking other people's work, I say: opt out. Do that. I still continue, because there's been such a lack of representation of artists like me in these spaces that even if you opt out, it doesn't change my process at all.

And that says a lot about gatekeepers, equity, representation in galleries and museums, and all the circles for digital artists, like DeviantArt. It just doesn't get at some of the real gray areas around this stuff.

CINDY COHN
I think there's something here about people learning as well, where, you know, young musicians start off wanting to play like Beethoven, right? But at some point you need to find your own voice. Obviously there are people who are just cheaters, who are trying to pass themselves off as somebody else, and that matters, that's important.

But there's also just this period of, I think, artistic growth, where you kind of start out trying to emulate somebody who you admire, and then through that process, you kind of figure out your own voice, which isn't going to be just the same.

NETTRICE GASKINS
And, you know, there was some backlash over a cover that I had done for a book. When the publisher came back, they said, where are your sources? It was a 1949 photograph of my mother and her friends. It has no watermark, so we don't know who took the photo. And from 1949, it's almost in the public domain; it's right on the edge.

CINDY COHN
So close!

NETTRICE GASKINS
But none of those people are living anymore. My mom passed in 2018. So I used that as a source: a picture of my mom from a photo album. Or, if it's a client, they pay for licensing of particular stock photos. In one case, I used three stock photos, because we couldn't find one stock photo that represented the character in the book.

So I had to do a kind of Frankenstein of the three to create that character. That's a collage. And then that was uploaded to the generator, after that, to go further.
So yeah, when we get into the backlash, a lot of people think, this is all you're doing. And then when I open up the door and say, look at what I'm doing: oh, that's not what she was doing at all!

That's because people don't have the education. They're hearing about it in certain circles, but they're not realizing that this is another creative process, something new that's entering our world, which people can accept or reject.

It's like when people said digital photography was going to take our jobs. Really, the best photography comes from being in a darkroom, going through the process with the enlarger and the chemicals; that's the true photography, not what you do with digital cameras and software. Same kind of idea, but here we are talking about something else. A very, very similar reaction.

CINDY COHN
Yeah, I think people tend to want to cling to the thing they're familiar with as the real thing, and they're sometimes a little slow to recognize what's going on. And what I really appreciate about your approach is that you're really using this like a tool. It's a complicated process, but it's a really cool new paintbrush that people can create new things with.

And I want to make sure that we're not throwing out the baby with the bathwater as we're thinking about this. I also think that my hope and my dream is that in our better technological future, these tools will be far more evenly distributed than some of the earlier tools, right?
You know, Second Life and things like that were fairly limited to those with the financial ability to get access. We've broadened that aperture a lot, though not as far as it needs to go. So part of my dream for a better tech future is that these tools are not locked away, available only to people who have certain access and certain credentials.

But really, we broaden them out. That points towards more open models, open foundational models, as well as a broader range of people being able to play with them, because I think that's where the cool stuff is probably going to come from. That's where the cool stuff has always come from, right?

It hasn't come from the mainstream corporate business model for art. It's come from all the little nooks and crannies where the light comes in.

NETTRICE GASKINS
Yeah. Absolutely.

CINDY COHN
Oh Nettrice, thank you so much for sharing your vision and your enthusiasm with us. This has just been an amazing conversation.

NETTRICE GASKINS
Thanks for having me.

JASON KELLEY
What an incredible conversation to have, in part because we got to talk to an actual artist about their process. And I learned that I know nothing about how to use generative AI, and that some people are really, really talented, and it comes from that kind of experience: being able to really build something, not just write a sentence and see what happens, but have an intention and a dedicated process for making art.

And I think it's going to be really helpful for more people to see the kind of art that Nettrice makes and hear some of that description of how she does it.

CINDY COHN
Yeah. I think so too. And I think the thing that shines through is that you can have all the tools, but you need the artist. If you don't have the artist with their knowledge and their eye and their vision, then you're not really creating art with this. You may be creating something, something you could use, but there's just no replacing the artist, even with the fanciest of tools.

JASON KELLEY
I keep coming back to the term that was applied to me often when I was younger, which was “script kiddie,” because I never learned how to program, but I was very good at finding some code and using it. And I think that a lot of people think that's the only thing that generative AI lets you do.

And it's clear that if you have the talent and the resources and the experience, you can do way more. And that's what Nettrice can show people. I hope more people come away from this conversation thinking, I have to jump into this now, because I'm really excited to do exactly the kinds of things that she's doing.

CINDY COHN
Yeah, you know, she made a piece of generative art every day for a year, right? I mean, first of all, she comes from an art background, but then, you know, you've got to really dive in, and I think that cool things can come out of it.

The other thing I really liked was her recognition that so much of our, our culture and our society and the things that we love about our world comes from, you know, people on the margins making do and making art with what they have.

And I love the image of gumbo as a thing that comes out of cultures that don't have access to the finest cuts of meat and seafood and instead build something else. And she paired that with the image of sashiko stitching in Japan, which came out of people trying to make their clothes last longer and make them stronger. And this gorgeous art form came out of it.

And we can think of today's tools, whether they're AI or others, as another medium in which we can begin to make things of beauty, or things that are useful, out of maybe the dribs and drabs of something that was built for a corporate purpose.

JASON KELLEY
That's exactly right. And I also loved, and I think we've discussed this at EFF many times, the comparison of generative AI tools to hip hop and to other forms of remix art. Probably a lot of people have made that connection, but I think it's worth saying again and again, because it is such a clear through line to those kinds of techniques and those kinds of art forms.

CINDY COHN
Yeah. And from EFF's policy perspective, one of the reasons we stand up for fair use and think it's so important is the recognition that arts like collage, and like using generative AI, are not going to thrive if our model of how we control or monetize them is based on charging for every single little piece.

Just as it limited hip hop, that's going to limit what kind of art we can get. And that doesn't mean we just shrug our shoulders and say, forget it, artists, you're never going to be paid again.

JASON KELLEY
I guess we're just never going to have hip hop, or…

CINDY COHN
Or the other side, which is: we need to find a way. There are lots of ways in which we compensate people for creation that aren't tied to individual control of individual artifacts. And in this age of AI, but in previous ages as well, the failure to look to those things and to embrace them has real impacts for our culture and society.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.

In this episode, you heard Xena's Kiss slash Medea's Kiss by MWIC, and Lost Track by Airtone featuring MWIC. You can find links to their music in our episode notes or on our website at EFF.org slash podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: Chronicling Online Communities

From Napster to YouTube, some of the most important and controversial uses of the internet have been about building community: connecting people all over the world who share similar interests, tastes, views, and concerns. Big corporations try to co-opt and control these communities, and politicians often promote scary narratives about technology’s dangerous influences, but users have pushed back against monopoly and rhetoric to find new ways to connect with each other.


(You can also find this episode on the Internet Archive and on YouTube.)

Alex Winter is a leading documentarian of the evolution of internet communities. He joins EFF’s Cindy Cohn and Jason Kelley to discuss the harms of behavioral advertising, what algorithms can and can’t be blamed for, and promoting the kind of digital literacy that can bring about a better internet—and a better world—for all of us. 

In this episode you’ll learn about: 

  • Debunking the monopolistic myth that communicating and sharing data is theft. 
  • Demystifying artificial intelligence so that it’s no longer a “black box” impervious to improvement. 
  • Decentralizing and democratizing the internet so more, diverse people can push technology, online communities, and our world forward. 
  • Finding a nuanced balance between free speech and harm mitigation in social media. 
  • Breaking corporations’ addiction to advertising revenue derived from promoting disinformation. 

Alex Winter is a director, writer and actor who has worked across film, television and theater. Best known on screen for “Bill & Ted’s Excellent Adventure” (1989) and its sequels as well as “The Lost Boys” (1987), “Destroy All Neighbors” (2024) and other films, he has directed documentaries including “Downloaded” (2013) about the Napster revolution; “Deep Web” (2015) about the online black market Silk Road and the trial of its creator Ross Ulbricht; “Trust Machine” (2018) about the rise of bitcoin and the blockchain; and “The YouTube Effect” (2022). He also has directed critically acclaimed documentaries about musician Frank Zappa and about the Panama Papers, the biggest global corruption scandal in history and the journalists who worked in secret and at great risk to break the story.   

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

ALEX WINTER
I think that people keep trying to separate the internet from any other social community, or just society, period. And I think that's very dangerous, because it allows them to be complacent and to allow these companies to get more powerful and to have more control. And they're disseminating all of our information; that's where all of our news comes from, all of how anyone understands what's going on on the planet.

And I think that's the problem, is I don't think we can afford to separate those things. We have to understand that it's part of society and deal with making a better world, which means we have to make a better internet.

CINDY COHN
That’s Alex Winter. He’s a documentary filmmaker who is also a deep geek. He’s made a series of films that chronicle the pressing issues in our digital age. But you may also know him as Bill S. Preston, Esquire - aka Bill of the Bill and Ted movies.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet. 

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. You know, at EFF we spend a lot of time pointing out the way things could go wrong – and then of course  jumping in to fight when they DO go wrong. But this show is about envisioning – and hopefully helping create – a better future.

JASON KELLEY
Our guest today, Alex Winter, is an actor and director and producer who has been working in show business for most of his life. But as Cindy mentioned, in the past decade or so he has become a sort of chronicler of our digital age with his documentary films. In 2013, Downloaded covered the rise and fall, and lasting impact, of Napster. 2015’s Deep Web – 

CINDY COHN
Where I was proud to be a talking head, by the way. 

JASON KELLEY
– is about the dark web and the trial of Ross Ulbricht who created the darknet market the Silk Road. And 2018’s Trust Machine was about blockchain and the evolution of cryptocurrency. And then most recently, The YouTube Effect looks at the history of the video site and its potentially dangerous but also beneficial impact on the world. That’s not to mention his documentaries on The Panama Papers and Frank Zappa. 

CINDY COHN
Like I said in the intro, looking back on the documentaries you’ve made over the past decade or so, I was struck with the thought that you’ve really become this chronicler of our digital age – you know, capturing some of the biggest online issues, or even shining a bit of light on some of the corners of the internet that people like me might live in, but others might not see so much. Where does that impulse come from for you?

ALEX WINTER
I think partly my age. I came up, obviously, before the digital revolution took root, and was doing a lot of work around the early days of CGI and had a lot of friends in that space. I got my first computer probably in ‘82 when I was in college, and got my first Mac in ‘83, got online by ‘84, dial-up era and was very taken with the nascent online communities at that time, the BBS and Usenet era. I was very active in those spaces. And I'm not at all a hacker, I was an artist and I was more invested in the spaces in that way, which a lot of artists were in the eighties and into the nineties, even before the web.

So I was just very taken with the birth of internet-based communities and the fact that it was such a democratized space, and I mean that, you know, literally – it was such an interesting mix of people from around the world who felt free to speak about whatever topics they were interested in. There were these incredible people from around the world talking about politics and art and everything in an extremely robust way.

But it also really seemed clear to me that this was the beginning of something, and so my interest from the doc side has always been charting the internet in terms of community, and what the impact of that community is on different things, political or otherwise. And that's why my first doc was about Napster, because, you know, fast forward to 1998, which for many people is ancient history, but for us was the future.

You're still in the dial-up modem era, and you now have an online community of over a hundred million people, in real time, around the world, who could search each other's hard drives and communicate. What made me want to make docs, I think, was that Napster was the beginning of my realizing the disparity between the media's and the public's perception of what the internet was, and what my experience was.

Shawn Fanning was kind of being tarred as this pirate and criminal. And while there were obviously ethical considerations with Napster in terms of the distribution of music, that was not my experience. My experience was this incredibly robust community, and that had real validity and significance on a human scale.

And that's, I think, what really prompted me to start telling stories in this space. I think if anyone's interested in doing anything, including what you all do there, it's because you feel like someone else isn't saying what you want to be said, right? And so you're like, well, I better say it because no one else is saying it. So I think that was the inspiration for me to spend more time in this space telling stories here.

CINDY COHN
That's great. And the stuff I hear in this is that, first of all, the internet kind of erased distance, so you could talk to people all over the world from a device in your home. And that people were really building community.

And I also hear, in terms of Napster, this huge disconnect between the business-model view of music and music fans' view of music. One of the most amazing things for me was realizing that I could find somebody who had a couple of songs I really liked and then look at everything else they liked. It challenged this idea that only professional music critics with a platform can suggest music to you. It literally felt like a dam broke and opened up a world of music. It sounds like that was your experience as well.

ALEX WINTER
It was, and I think that really aptly describes the almost addictive fascination that people had with Napster, and the confusion, even retrospectively, in thinking that that addiction came from theft, from a desire to steal in large quantities. Obviously you had kids in college dorm rooms pulling down gigabytes of music, but the pull, the attraction of Napster, was exactly what you just said – I would find friends in Japan and Africa and Eastern Europe who had some weird Coltrane bootleg I'd never heard, and then I was like, oh, what else do they have? And here's what I have, and I have a very eclectic music collection.

Then you start talking about art, then you start talking about politics, because it was a very robust forum, so everyone was talking to each other. So it really was community, and I think that gets lost, because the narrative wants to remain the narrative, in terms of gatekeepers, in terms of how capitalism works. That power dynamic was so completely threatened by Napster that the wheels immediately cranked into gear to create a narrative that said: if you use this, you're just a terrible human being.

And of course what it created was the beginning of this kind of online rebellion, where people who before probably didn't think of themselves as technical, or even that interested in technology, were saying: well, I'm not this thing you're saying I am, and now I'm really going to rebel against you. Now I'm really going to dive into this space. And I think it actually drew more people into entering and building online communities, because they didn't feel like they were understood or adequately represented.

And that led all the way to the Arab Spring and Occupy, and so many other things that came up after that.

JASON KELLEY
The communities angle that you're talking about is probably really useful to our audience, I think, because they probably find themselves, and I certainly find myself, in a lot of the kinds of communities that you've covered. Which often makes me think: how is this guy inside my head?

How do you think about the sorts of communities that you need to, or want to, chronicle? I know you mentioned this disconnect between the way the media covers a community and the community itself. But I'm wondering, what do you see now? Are there communities that you've missed the boat on covering?

Or things that you want to cover at this moment that just aren't getting the attention that you think they should?

ALEX WINTER
I honestly just follow the things that interest me the most. Look, I don't see myself as, in quotes, a chronicler of anything; I have a more modest view of myself. So I really just respond to the things that I find interesting, on two tracks, one being what's personally impacting me.

So I'm not really an outsider deciding what I will cover next or what topics I should address; it's about what's really impacting me personally. I was hugely invested in Napster. I mean, I was going into my office on weekends and powering up every single computer onto Napster, all weekend, for the better part of a year. Fanning laughed at me when I met him, but -

CINDY COHN  
Luckily, the statute of limitations may have run on that, that's good.

ALEX WINTER
Yeah, exactly. 

JASON KELLEY  
Yeah, I'm sure you're not alone.

ALEX WINTER
Yeah, but I mean, as I told Don Ienner when I did the movie: dude, I'd already bought all this music like nine times over, on vinyl, on cassette, on CD. I think I even had Elcasets at one point. So the record industry still owes me money, as far as I'm concerned.

CINDY COHN
I agree.

ALEX WINTER
But no, it was really a personal investment. Even my interest in the blockchain and Bitcoin, which I have mixed feelings about, I tried to cover almost more from a political angle. I was interested, same with Deep Web in a way, in how counter-narratives were being built online, and in how people were trying to create systems and spaces online once the internet became corporatized, which it really did as soon as the web appeared. What did people do in response to the corporatization of these spaces?

And that's why I was covering Lauri Love's case in England, and eventually Barrett Brown's case, and then the Silk Road, which I was mostly interested in for the same reason as Napster: who were these people, what were they talking about, what drew them to this space? Because it was a very clunky, clumsy way to buy drugs, if that was really what you wanted to do, and Bitcoin is a terrible tool for crime, as everyone now knows, I think, but didn't so well back then.

So what was really compelling people? A lot of it, again, was that Silk Road was very much like the alt and rec newsgroup world of the early Usenet days: a lot of divergent voices and politics and things like that.

So YouTube was different, because Gale Anne Hurd, the producer, had approached me and asked me if I wanted to tackle this with her. And I'd been looking at Google, largely, and that was why I had a personal interest. And I've got three boys, all of whom came up in the YouTube generations. They all moved off of regular TV and onto their laptops at a certain point in their childhood, and were just on YouTube for everything.

So I wanted to look at the corporatization of the internet: what is the societal impact of the fact that our largest online community, which is YouTube, is owned by arguably the largest corporation on the planet, which is also a monopoly, which is also a black box?

And what does that mean? What are the societal  implications of that? So that was the kind of motive there, but it still was looking at it as a community largely.

CINDY COHN
So the conceit of the show is that we're trying to fix the internet, and I want to know: you've done a lot to shine a light on these stories from different directions, but what does it look like if we get it right? What will we see if we build online communities that are better than the ones getting the most attention now?

ALEX WINTER
I think that, you know, I've spent the last two years since I made the film and up until very recently on the road, trying to answer that question for myself, really, because I don't believe I have the answer that I need to bestow upon the world. I have a lot of questions, yeah. I do have an opinion. 

Right now, I generally feel, like many people do, that we slept on the last 20 years – I mean, you all didn't, but many people did, right? And so there's a kind of reckoning now, because we let these corporations get away with murder, literally and figuratively. And I think that we're in a phase of debunking various myths, and that's going to take some time before we can actually do the work to make the internet better.

A big thesis that I had in making The YouTube Effect was to debunk the theory of the rabbit hole and the algorithm as some kind of all-encompassing evil. Because, sort of like we're seeing in AI now with this rhetoric that AI is going to kill everybody, to me those are very agenda-based narratives. They convince the public that this is all beyond them, and that they should just go back to their homes, keep buying things and eating food, ignore these thorny areas in which they have no expertise, and leave it to the experts.

And of course, that means the status quo is upheld. The corporations keep doing whatever they want, and they have no oversight, which is what they want. Every time Sam Altman says AI is going to kill the world, he's just saying: OpenAI is a black box, please leave us alone, let us make lots of money, and go away. That's all that means. So I think we have to start looking at the internet and technology as being run by people. There aren't even that many people running it; there's only a handful of people running the whole damn thing, for the most part. They have agendas, they have motives, they have political affiliations, they have a capitalist orientation.

So I think we need to start looking at the internet in a much more specific way. I know that you all have been doing this for a long time; most people do not. So, more of that: more calling people on the carpet, more specificity.

The other thing that we're seeing, and again, I'm preaching to the choir here with EFF, is that any time the public or the government or the media wakes up to something they're behind on, their instinct for how to fix it is way off, right?

And so that's the other place we're at right now, with KOSA and the DSA and the Section 230 reform discussions, and they're bananas. And you feel like you're screaming into a chasm, right? Because if you say these things, people treat you like you're some kind of lunatic. Like, what do you mean you don't want to turn off Section 230? That would solve everything! I'm like, it wouldn't, it would just break the internet! So I feel a little like Cassandra, yowling into a void.

So I do think it's going to take a minute to fix the internet, but I think we'll get there. The new generations are smarter, and the stakes are higher for them. And I don't think the internet or social media is necessarily bad for kids, full stop; there's a lot of propaganda there. But kids don't want harms. They want a safer environment for themselves. They don't want to stop using these platforms; they just want them to work better.

What's happened in the last couple of years, which I think is a good thing, is that people are breaking off and forming their own communities again, even kids. My teenagers started doing it during COVID. On Discord, they would create their own servers that no one could get on but them. There was no danger of being infiltrated by crazy people. All their friends were there, they could bring other friends in, and they could talk about whatever issues they wanted to talk about. So there's a kind of return to a fractured, fragmented, smaller set of communities.

And I think if the internet continues to go that way, that's a good thing, right? You don't have to be on TikTok or YouTube or whatever to find your people. And for grownups, the silver lining of what happened with Twitter – Elon Musk buying it and immediately turning it into a Nazi crash pad – is that the average adult realized they didn't have to be there either, right? They don't have to use just one place; the internet is filled with little communities they can go to to talk to their friends.

So I think we're back in this kind of Wild West, like we almost were pre-web and at the beginning of the web, and I think that's good. But I do think there's an enormous amount of misinformation, and some very bad policy all over the world, that is going to cause a lot of harm.

CINDY COHN
I mean, that's kind of my challenge to you: once we've realized that things are broken, how do we evaluate all the people who are coming in and claiming that they have the fix? In The YouTube Effect, you talked to Carrie Goldberg. She has a lot of passion.

I think she's wrong about the answer. She's, I think, done a very good job illuminating some of the problems, especially for specific communities, people facing domestic violence and doxing and things like that. But she's rushed to a really dangerous answer for the internet overall. 

So I guess my challenge is, how do we help people think critically about not just the problems, but the potential issues with solutions? You know, the TikTok bans are something that's going on across the country now, and it feels like the Napster days, right?

ALEX WINTER
Yeah, totally.

CINDY COHN
People have focused on a particular issue and used it to try to say, Oh, we're just going to ban this. And all the people who use this technology for all the things that are not even remotely related to the problem are going to be impacted by this “ban-first” strategy.

ALEX WINTER
Yeah. I mean, it's media literacy, it's digital literacy. One of the most despairing things for me, making docs in this space, is how much prejudice there is against making docs in this space. Obviously the far right has their agenda, which is just to silence everybody they don't agree with, right? I mean, the left can do the same thing, but the right is very good at it.

Where the left, or center-left, makes mistakes is that they're ignorant about how these technologies work, and so their solutions are wrong. We see that over and over: they have really good intentions, but the solutions don't actually make sense for how these technologies work. We're seeing that in AI. That was an area I tried to do as much work as I could in during the Hollywood strike, to educate people about AI, because they were so completely misinformed, and their fixes were not fixes: they were not effective, and they would not be legally binding. And it was despairing only because it's kind of frowned upon to say anything about technology other than "don't use it."

CINDY COHN
Yeah.

ALEX WINTER
Right? Even other documentaries have a thesis like, well, just tell your kids they can't be online, tell them to read more literature.

Right? And it just drives me crazy, because I'm a progressive lefty and my kids are all online, and guess what? They still read books and play music and go outside. So it's this very binary, black-or-white attitude towards technology: "Oh, it's just bad. Why can't we go back to the old days?"

CINDY COHN
And I think there's a false sense that if we could just turn back the clock to pre-internet times, everything would be perfect. Right? My friend Cory Doctorow talks about this, how we need to build the great new world, not the good old world. And I think that's true even for internet oldies like you and me, who might be thinking about the 80s and 90s.

I think we need to embrace where we are now and then build the better world forward. Now, I agree with you strongly about decentralization and smaller communities. As somebody who cares about free speech and privacy, I don't see a way to solve the free speech and privacy problems of the giant platforms.

We're not going to get better dictators. We need to get rid of the dictators and make a lot more spaces – not necessarily smaller, but different spaces, differently governed spaces. But I agree with you that there is this rush to kind of turn back the clock, and I think we should try to turn it forward. And again, I kind of want to push you a little bit: what does the turning-it-forward world look like?

ALEX WINTER
I mean, I have really strong opinions about that. Thankfully, my kids are very tech-savvy, like any kid, and I pay attention to what they're doing, and I find it fascinating. And the thing about thinking backwards is that it's a losing proposition, because the world will leave you behind.

Because the world's not going to go backwards. And the world is only going to go forward. And so you either have a say in what that looks like, or you don't. 

I think two things have to happen. One is media literacy, and a sort of weakening of this narrative that it's all bad, so that more people, intelligent people, get involved in the future. I think that will help adults get immersed in new technologies and new communities and what's going on. At the same time, I think we have to be working harder to attack the tech monopolies.

I think being involved, as opposed to being abstinent, is really, really important. And I think more of that will happen with new generations, because then your eyes and your ears are open, and you'll find new communities and the like. But at the same time, we have to work much harder on the other side, because this idea that we're allowing big tech to police themselves is just ludicrous. That's still the world that we're in, and it drives me crazy. They have one agenda, which is profit and power, and they don't care about anything else.

And I think that's the danger of AI. It's not that we're all going to die by robots; it's that this sort of capitalist machine is just going to roll along unchecked. It will eat labor, and it will eat other companies. That's the problem.

CINDY COHN  
I mean, I think that's one of the tricky parts about the Sam Altman shift, right, from "don't regulate us" to "please regulate us." Behind that "please regulate us" is: "and we'll tell you what the regulations should look like, because we're the only ones, these giant gurus, who understand enough about it to figure out how to regulate us."

And I just think it's important to recognize that it's a pivot. You could get tricked into thinking it's actually better, and I don't think it is.

ALEX WINTER
It's 100 percent agenda-based. I mean, it's not only not better, it's completely self-serving. And I think that as long as we are following these people, as opposed to leading them, we're going to have a problem.

CINDY COHN
Absolutely.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Alex Winter about YouTube.

ALEX WINTER
There's a lot of information there that's of extreme value: medical, artistic, historical, political. In the film, we go to great lengths to show how Caleb Cain, who got pulled in and radicalized by the proliferation of far-right, even neo-Nazi, nationalist, and white supremacist content, which still proliferates on YouTube, because it really is not algorithm-driven but business- and incentive-based, was himself de-indoctrinated by ContraPoints, by Natalie Wynn's channel.

And you have to understand that, you know, more teenagers watch YouTube than Netflix. Like, it is everything. It is, by an order of magnitude, so much more of how they spend their time, um, consuming media than anything else. And they're watching their friends talk, they're watching political speakers talk, they're watching, you know, my son with his various interests, from photography to weightlifting to whatever, he's young. All of that's coming from YouTube. All of it.

And they're pretty good at discerning the crap from, you know, what's real, although like now a lot of the studies show you have to be generally predisposed to this kind of content to really go down into the sort of darker areas, as those younger people can be.

You know, I often say that the greatest solution for people who end up getting radicalized on YouTube is more YouTube. Right? It's to find the people on YouTube who are doing good. And I think that's one of the big misunderstandings about disinfo: you can consume good sources, you just have to find them. And people are actually better at discerning truth from lies if that's really what they want to do, as opposed to, like, I just want to get awash in QAnon or whatever.

I think YouTube started not necessarily with pure intentions, but I think that they did start with some good intentions in terms of intentionally democratizing the landscape and voices, and allowing in people from marginalized groups and under autocratic governments. They allowed and, and they promoted that content, and they created the age of the democratized influencer.

That was intentional. And I would argue that they did a better job of that than my industry did. I think my industry followed their lead. I think the diversity initiatives in Hollywood came after, because Hollywood, like everyone else, is driven by money only, and they were like, oh my God, there are these giant trans and African and Chinese influencers that have huge audiences, we should start allowing more people to have a voice in our business too, 'cause we'll make money off of them. But I think that now YouTube has grown so big and so far beyond them, and it's making them so much money, and they're so incentivized to promote disinformation, propaganda, sort of violent, um, content, because it just makes so much money for them on the ad side, uh, that it's sort of a runaway train at this point.

CINDY COHN
One of the things that EFF has taken a stand on is banning behavioral advertising. And I think one of the things you did in The YouTube Effect is take a hard look at, you know, how big a role the algorithm is actually playing. And I think the movie kind of points out that it's not as big a role as people who, uh, who want an easy answer to the problem are, are saying.

We've been thinking about this from the privacy perspective, and we decided that behavioral advertising was behind so many of the problems we had, and I wondered, um, how you think about that, because that is the kind of tracking and targeting that feeds some of those algorithms, but it does a lot more.

ALEX WINTER
Yeah, for all the hue and cry that they can't moderate their content, I think that there's absolutely no doubt that they can. And I think that we're beginning, again, this is an area that EFF specifically specializes in. But I think the area of free speech, and what constitutes free speech as opposed to what they could actually be doing to mitigate harms, is very nuanced.

And it serves them to say that it's not. That it's not nuanced, and that either they're going to be shackling free speech or they should be left alone to do whatever they want, which is make money off of advertising, a lot of which is harmful. So I think getting into the weeds on that is extremely important.

You know, a recent example was just how they stopped deplatforming all the Stop the Steal content, which they were doing very successfully. The just flat-out, you know, uh, 2020 election propaganda. And, you know, that gets people hurt. I mean, it can get people killed, and it's really not hard to do, um, but they make more money if they allow this kind of rampant, aggressive, propagandized advertising, as well as content, on their platform.

I just think that we have to be looking at advertising and how it functions in a very granular way, because the whole thesis of the film, such as we had one, is that this is not about an algorithm, it's about a business model.

These are business incentives. It's no different, I've been saying this everywhere: it's exactly the same as, as the, the Hearst and Pulitzer wars of the late 1800s. It's the same. It's just, we want to make money. We know what attracts eyeballs. We want to advertise and make money from ad revenue from pumping out this garbage, because people eat it up. It's really similar to that. That doesn't require an algorithm.

CINDY COHN
My dream is Alex Winter makes a movie that helps us evaluate all the things that people who are worried about the internet are jumping in to say that we ought to do, and helps give people that kind of evaluative power. Because we do see, over and over again, this rush to go to censorship, which, you know, is problematic for free expression but also just won't work, and this kind of gliding over the idea that privacy has anything to do with online harms and that standing up for privacy will do anything.

I just feel like sometimes, this literacy place needs to be both about the problems and about critically thinking about the things that are being put forward as solutions.

ALEX WINTER
Yeah, I mean, I've been writing a lot about that for the last two years. I've written, I think, I don't know, countless op-eds. And there are way smarter people than me, like you all and Cory Doctorow, writing about this like crazy. And I think all of that is having an impact. I think that the building blocks of proper internet literacy are being set.

CINDY COHN
Well, I appreciate that you've got three kids who are, you know, healthy and happy using the internet, because I think those stories get overlooked as well. Not that there aren't real harms. It's just that there's this baby-with-the-bathwater kind of approach that we find in policymaking.

ALEX WINTER
Yeah, completely. So I think that people feel like their arms are being twisted, that they have to say these hyper-negative things or fall in line with these narratives. You know, a movie requires characters, right? And I would need a court case or something to follow to find the way in, and I've always got my eyes on that. But I do think we're at a kind of a critical point.

It's really funny, because I'm friends with a lot of different film critics. I've just been around a long time, and I like, you know, reading good film criticism. And when I made this film, one of them, who I respect greatly, was like, I don't want to review your movie, because I really didn't like it and I don't want to give you a really bad review.

And I said, well, why didn't you like it? And it's like, because I just didn't like your perspective. And I was like, well, what didn't you like about my perspective? Like, well, you just weren't hard enough on YouTube. You didn't just come right out and say they're just terrible and no one should be using it.

And I was like, you're the problem. And there's so much of that, um, that I feel like there is a, uh, you know, there's a bias that is going to take time to overcome. No matter what anyone says or whatever film anyone makes, we just have to kind of keep chipping away at it.

JASON KELLEY
Well, it's a shame we didn't get a chance to talk to him about Frank Zappa. But what we did talk to him about was probably more interesting to our audience. The thing that stood out to me was the way he sees these technologies and sort of focuses his documentaries on the communities that they facilitate.

And that was just, I think, a useful way to think about, you know, everything from the deep web to blockchain to YouTube to Napster. He sees these as building communities, and those communities are not necessarily good or bad, but they have some really positive elements, and that led him to this really interesting idea of a future of smaller communities, which I think we all agree with.

Does that sound sort of like what you pulled away from the conversation, Cindy?

CINDY COHN
I think that's right. And I also think he was really smart at noticing the difference between what it was like to be inside some of those communities and how they got portrayed in broader society. He pointed out that when corporate interests, who were the copyright interests, saw what was happening on Napster, they very quickly put together a narrative that everybody was pirates, which was very different from how it felt to be inside that community, having access to all of that information.

That disconnect, between the story told by the people who control our broader societal conversation, who are often corporate interests with their own commercial interests at heart, and what it's like to be inside the communities, is what connected the Silk Road story with the Napster story. And in some ways YouTube is interesting because it's actually gigantic. It's not a little corner of the internet. But I think he's trying to lift up, you know, both the issues that we see in YouTube that are problematic, but also all the other things inside YouTube that are not problematic and, as he pointed out in the story about Caleb Cain, can be part of the solution to pulling people out of the harms.

So I really appreciate this focus. I think it really hearkens back to, you know, one of the coolest things about the internet when it first came along was this idea that we could build communities free of distance and outside of the corporate spaces.

JASON KELLEY
Yeah. And the point you're making about his recognition of. Who gets to decide what's to blame, I think leads us right to the conversation around YouTube, which is it's easy to blame the algorithm when what's actually driving a lot of the problems we see with the site are corporate interests and engagement with the kind of content that gets people riled up and also makes a lot of money.

And I just love that he's able to sort of parse out these nuances in a way that surprisingly few people do, um, you know, across media and journalism and certainly, unfortunately, in government.

CINDY COHN
Yeah, and I think that, you know, it's, it's fun to have a conversation with somebody who kind of gets it at this level about the problems, somebody who, you know, name-checked issues that EFF has been working on for a long time, whether that's KOSA or Section 230 or algorithmic issues, and about how wrongheaded the proposed solutions are.

I appreciate that it kind of drives him crazy in the way it drives me crazy that once you've articulated the harms, people seem to rush towards solutions, or at least are pushed towards solutions, that are not getting us out of this corporate control but rather, in some ways, putting us deeper into it.

And he's already seeing that in the AI push for regulation. I think he's exactly right about that. I don't know if I convinced him to make his next movie about all of these solutions and how to evaluate them. I'll have to keep trying. That may not be where he gets his inspiration.

JASON KELLEY
We'll see. I mean, if nothing else, EFF is in many of the documentaries that he has made, and my guess is that it will continue to be a voice of reason in the ones he makes in the future.

CINDY COHN
I really appreciate that Alex has taken his skills and talents and platforms to really lift up the kind of ordinary people who are finding community online and help us find ways to keep that part, and even lift it up as we move into the future.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. 

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob 

You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: Building a Tactile Internet

Blind and low-vision people have experienced remarkable gains in information literacy because of digital technologies, like being able to access an online library offering more than 1.2 million books that can be translated into text-to-speech or digital Braille. But it can be a lot harder to come by an accessible map of a neighborhood they want to visit, or any simple diagram, because of the limited availability of tactile graphics equipment and inaccessible design and publishing practices.


(You can also find this episode on the Internet Archive and on YouTube.)

Chancey Fleet wants a technological future that’s more organically attuned to people’s needs, which requires including people with disabilities in every step of the development and deployment process. She speaks with EFF’s Cindy Cohn and Jason Kelley about building an internet that’s just and useful for all, and why this must include giving blind and low-vision people the discretion to decide when and how to engage artificial intelligence tools to solve accessibility problems and surmount barriers. 

In this episode you’ll learn about: 

  • The importance of creating an internet that’s not text-only, but that incorporates tactile images and other technology to give everyone a richer, more fulfilling experience. 
  • Why AI-powered visual description apps still need human auditing. 
  • How inclusiveness in tech development is always a work in progress. 
  • Why we must prepare people with the self-confidence, literacy, and low-tech skills they need to get everything they can out of even the most optimally designed technology. 
  • Making it easier for everyone to travel the two-way street between enjoyment and productivity online. 

Chancey Fleet’s writing, organizing and advocacy explores how cloud-connected accessibility tools benefit and harm, empower and expose communities of disability. She is the Assistive Technology Coordinator at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library, where she founded and maintains the Dimensions Project, a free open lab for the exploration and creation of accessible images, models and data representations through tactile graphics, 3D models and nonvisual approaches to coding, CAD and “visual” arts. She is a former fellow and current affiliate-in-residence at Data & Society; she is president of the National Federation of the Blind’s Assistive Technology Trainers Division; and she was recognized as a 2017 Library Journal Mover and Shaker. 

Resources: 

 What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

CHANCEY FLEET
The fact is, as I see it, that if you are presented with what seems on a quick read, like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. What I've already noticed is blind people in droves dumping their descriptions of personal images, sentimental images, generated by AI onto social media, and there is a certain hyper-normative quality to the language. Any scene that contains a child or a dog is heartwarming. Any sunset or sunrise is vibrant. Anything with a couch and a lamp is calm or cozy. Idiosyncrasies are left by the wayside.

Unflattering little aspects of an image are often unremarked upon, and I feel like I'm being served some Ikea pressboard of reality. And it is so much better than anything that we've had before, on demand, without having to involve a sighted human being. And it's good enough to mail, kind of like a Hallmark card. But do I want the totality of digital description online to slide into this hyper-normative, serene, anodyne description? I do not. I think that we need to do something about it.

CINDY COHN
That's Chancey Fleet describing one of the problems that has arisen as AI is increasingly used in assistive technologies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we start to get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping in to the fight when they DO go wrong. But this show is about optimism, hope and bright ideas for the future.

According to a National Health Interview Survey from 2018, more than 32 million Americans reported that they had vision loss, including blindness. And as our population continues to age, this number only increases. And a big part of fixing the internet means fixing it so that it works properly for everyone who needs and wants to use it – blind, sighted, and everyone in between.

JASON KELLEY
Our guest today is Chancey Fleet. She is the Assistive Technology Coordinator for the New York Public Library, where she teaches people how to use assistive technology to make their lives easier and more accessible. She’s also the president of the Assistive Technology Trainers Division of the National Federation of the Blind.

CINDY COHN
We started our conversation as we often do – by asking Chancey what the world could be like if we started getting it right for blind and low vision people. 

CHANCEY FLEET
The unifying feature of rightness for blind and low vision folks is that we encounter a digital commons that plays to our strengths, and that means that it's easy for us to find information that we can access and understand. That might mean that web content always has semantic structure that includes things like headings for navigation. 

But it also includes things that we don't have much of right now, like a non-visual way to access maps and diagrams and images, because of course, the internet hasn't been in text-only mode for everyone else for a really long time.

I think getting the internet right also means that we're able to find each other and build community because we're a really low incidence disability. So odds are your colleague, your neighbor, your family members aren't blind or low-vision, and so we really have to learn and produce knowledge and circulate knowledge with each other. And when the internet gets it right, that's something that's easy for us to do. 
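Chancey's first two asks, navigable headings and accessible images, are concrete enough to check mechanically. As a minimal sketch (not anything discussed in the episode), here is how a developer might spot-check a page using Python with the requests and BeautifulSoup libraries; the audit_page function and the example URL are hypothetical illustrations.

```python
# Minimal accessibility spot-check: list the heading outline that
# screen-reader users navigate by, and count images that lack alt text.
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Headings give screen-reader users a navigable outline of the page.
    for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
        print(f"<{h.name}> {h.get_text(strip=True)[:60]}")

    # Images with no alt attribute (or an empty one) are invisible to
    # screen readers unless they are purely decorative.
    images = soup.find_all("img")
    missing = [img for img in images if not img.get("alt")]
    print(f"{len(images)} images, {len(missing)} missing alt text")

audit_page("https://example.com")  # hypothetical target page
```

A script like this only finds the mechanical gaps; whether the headings and alt text are actually useful still takes the human judgment Chancey describes throughout the episode.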

CINDY COHN
I think that's so right. And it's honestly consistent with, I think, what every community wants, right? I mean, the Internet's highest and best use is to connect us to the people we wanna be connected to. And the way that it works best is if the people who are the users of it, the people who are relying on it have, not just a voice, but a role in how this works.

I've heard you talk about that in the context of what you call ‘ghostwritten code.’ Do you wanna explain what that is? Am I right? I think that's one of the things that has concerned you.

CHANCEY FLEET
Yeah, you are right. A lot of people who work in design and development are used to thinking of blind and disabled people in terms of user stories and personas, and they may know on paper what the Web Content Accessibility Guidelines, for instance, say that a blind or low-vision user, or a keyboard-only user, or a switch user needs. The problems crop up when they interpret the concrete aspects of those guidelines without having a lived experience that leads them to understand usability in the real world.

I can give you one example. A few years ago, Google rolled out a transcribe feature within Google Translate, which I was personally super excited about. And by the way, I'm a refreshable Braille user, which means I use a Braille display with my iPhone. If you were running VoiceOver, the screen reader for iPhone, when you launched the transcribe feature, it actually scolded you that it would not proceed, that it would not transcribe, until you plugged in headphones, because well-meaning developers and designers thought, well, VoiceOver users have phones that talk, and if those phones are talking, it's going to ruin the transcription, so we'll just prevent that from happening. They didn't know about me. They didn't know about refreshable Braille users, or users that might have another way to use VoiceOver that didn't involve speech out loud.

And so that, I guess you could call it a bug, I would call it a service denial, was around for a few weeks until our community communicated back about it, and if there had been blind people in the room or Braille users in the room, that would've never happened.

JASON KELLEY
I think this will be really interesting and useful for the designers at EFF, who think a lot in user personas and also about accessibility. And I think just hearing what happens when you get it wrong, and how simple the mistake can be, is really useful for folks thinking about inclusion, and also just how essential it is to make sure there's more in-depth testing and personas, as you're saying.

I wanna talk a little bit about the variety of things you brought up in your opening salvo, which I think we're gonna cover a lot of. One of the points you mentioned, or maybe you didn't say it this way in the opening, but you've written about it and talked about it, is tactile graphics and something that's called the problem of image poverty online.

And basically, as you mentioned, the internet is a primarily text-based experience for blind and low-vision users. But there are these tools that, in a better future, will be more accessible, both available and usable and effective. And I wonder if you could talk about some of those tools, like tablets and 3D printers and things like that.

CHANCEY FLEET
So it's wild to me the way that our access to information as blind folks has evolved, given the tools that we've had. Since the eighties or nineties, we've had Braille embossers that are also capable of creating tactile graphics, which is a fancy way to say raised drawings.

A graphics-capable embosser can emboss up to a hundred dots per inch. So if you look at it visually, it's a bit pixelated, but it approaches the limits of tactile perception. And in this way, we can experience media that includes maybe Braille in the form of labels, but also different line types: dotted lines, dashed lines, textured infills.

Tactile design is a little bit different from visual design because our perceptual acuity is lower. It's good to scale things up, and it's good to declutter items. We may separate layers of information out into separate graphics. If Braille were print, it would be a thirty-six-point font, so we use abbreviations liberally when we need to squeeze some Braille onto an image.

And of course, we can't use color to communicate anything semantic. So when the idea of a red line or a blue line goes away, we start thinking about a solid line versus a dashed or dotted line. When we think about a pie chart, we think about maybe textures or labels in place of colors. But what's interesting to me is that tactile graphics equipment has been on the market since at least the eighties, and probably someone will come along and correct me that it's even earlier than that.

Most of that equipment is on the wrong side of an institutional locked door, so it belongs to a disability services office in a university. It belongs to the makers of standardized tests. It belongs to publishers. I've often heard my library patrons say something along the lines of, oh yeah, there was a graphics embosser in my school, but I never got to touch it, I never got to use it. 

Sometimes the software that's used to produce tactile graphics is, in itself, inaccessible. And so I think blind people have experienced pretty remarkable gains in general in regard to our information literacy because of digital technologies and the internet. For example, I can go to Bookshare.org, which is an online library for people with print disabilities, and have my choice of a million books right now.

And those can automatically be translated to text-to-speech or to digital braille. But if I want a map of the neighborhood that I'm going to visit tomorrow, or if I want a glimpse of how electoral races play out, that can be really hard to come by. And I think it is a combination of the limited availability of tactile graphics equipment, inaccessibility of design and publishing practices for tactile graphics, and then this sort of vicious circular lack of demand that happens when people don't have access. 

When I ask most blind people, they'll say that they've maybe encountered two or three tactile graphics in the past year, maybe less. Um, a lot of us got more than that during our K-12 instruction. But what I find, at least for myself, is that when tactile graphics are so strongly associated with standardized testing and homework and never associated with my own curiosity or fun or playfulness or exploration, for a long time, that actually dampened down my desire to experience tactile graphics.

And so most of us would say, probably, if I can be so bold as to think that I speak for the community for a second, most of us would say that yes, we have the right to an accessible web. Yes, we have the right to digital text. I think far fewer of us are comfortable saying, or understand the power of saying, we also have a right to images. And so in the best possible version of the internet that I imagine, we have three things. We have tactile graphics equipment that is bought more frequently, so there are economies of scale and the prices come down. We have tactile design and graphics design programs that are more accessible than what's on the market right now. And critically, we have enough access to tactile graphics online that people can find the kind of information that engages and compels them. And within 10 years or so, people are saying: we don't live in a text-only world, images aren't inherently visual, they are spatial, and we have a right to them.
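The tactile design rules Chancey laid out a moment ago, line style instead of color, scaled-up elements, decluttered layouts, translate directly into charting code. Here is a minimal sketch, assuming Python with matplotlib; the data, labels, and output filename are hypothetical, and a graphics-capable embosser would still be needed to render the result.

```python
# A chart styled for embossing: line style replaces color, everything is
# scaled up, and visual clutter is stripped out.
import matplotlib.pyplot as plt

years = [2018, 2019, 2020, 2021, 2022]
series_a = [3, 5, 9, 12, 15]   # hypothetical data
series_b = [4, 4, 6, 7, 11]

fig, ax = plt.subplots(figsize=(11, 8.5))  # full page, to scale things up
# Solid vs. dashed replaces "the red line" vs. "the blue line".
ax.plot(years, series_a, color="black", linestyle="-", linewidth=3, label="A")
ax.plot(years, series_b, color="black", linestyle="--", linewidth=3, label="B")
ax.set_xlabel("Year")
ax.set_ylabel("Count")
ax.legend()
# Drop gridlines and extra spines that would read as clutter under the fingertips.
ax.grid(False)
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
fig.savefig("tactile_chart.png", dpi=100)  # ~100 dpi, the embosser's limit
```

Saving at roughly 100 dpi matches the embosser resolution she mentions; anything finer than that would be lost to touch anyway.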

JASON KELLEY
I read a piece that you had written about the importance of data visualizations during the pandemic, and how important it was for that sort of flatten-the-curve graph to be able to be seen, or, or touched in this case, um, by as many people as possible. That really struck me, but I also love this idea that we shouldn't have these tools only because they're necessary, but also because people deserve to be able to enjoy the experience of the internet.

CHANCEY FLEET
Right, and you never know when enjoyment is going to lead to something productive, or when something productive you're doing spins out into enjoyment. Somebody sent me a book of tactile origami diagrams. It's a four-volume book with maybe 40 models in it, and I've been working through them all. I can do almost all of them now, and it's really hard as a blind person to go online and find origami instructions that make any sense from an accessibility perspective.

There is a wonderful website called AccessOrigami.com. Lindy Vandermeer out of South Africa does great descriptive origami instruction. So it's all text, directing you step by step by step. But the thing is, I'm a spatial thinker, what you might think of as a visual thinker, and so I can get more out of a diagram that's showing me where to flip dot A to dot B than I can in reading three paragraphs. It's faster, it's more fluid, it's more fun. And so I treasure this book, and unfortunately every other blind person I show it to also treasures it and can't have it, 'cause I've got one copy. And I just imagine a world in which, when there's a diagram on screen, we can use some kind of process to re-render it in a more optimal format for tactile exploration. That might mean AI or machine learning, and we can talk a little bit about that later. But a lot of what we learn about what we're good at, what we enjoy, what we want more of in life, you know, we do find online these days, and I want to be able to dive into those moments of curiosity and interest without having to first engineer a seven-step plan to get access to whatever it is that's on my screen.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Chancey Fleet.

CINDY COHN
So let's talk a little bit about AI and I'd love to hear your perspective on where AI is gonna be helpful and where we ought to be cautious.

CHANCEY FLEET
So if you are blind and reasonably online, and you have a smartphone, and you're somebody that's comfortable enough with your smartphone that, like, you download apps on a discretionary basis, there's a good chance that you've heard of a new feature in the app Be My Eyes, called Be My AI. It's a describer powered by ChatGPT with computer vision.

You aim your camera at something, wait a few seconds, and a fairly rich description comes back. It's more detailed and nuanced than anything that AI or machine learning has delivered before, and so it strikes a lot of us as transformational and/or uncanny. And it allows us to grab glimpses of what I would call a hypothesized visual world, because as we all know, these AIs make up stories out of whole cloth, include details that aren't there, and skip details that to the average human observer would be obviously relevant. So I can know that the description I'm getting is probably not prioritized and detailed in quite the same way that a human describer would approach it.

So what's interesting to me is that, since interconnected blind folks have such a dense social graph, we are all sort of diving into this together and advising each other on what's going well and what's not. And I think that a lot of us are deriving authentic value from this experience, as bounded by caveats as it is. At the same time, I fear that when this technology scales, which it will if other forces don't counteract it, it may become a convincing enough business case that organizations and institutions can skip human authoring of alt text to describe images online and substitute these rich-seeming descriptions that are generated by an AI, even if that's done in such a way that a human auditor can go in and make changes.

The fact is, as I see it, that if you are presented with what seems, on a quick read, like good enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete.

CINDY COHN
I think what I hear in the answer is that it can be an augment to the humans doing the describing, um, but not a replacement for them, and that's where the "but it's cheaper" part comes in, right? And I think keeping our North Star on, you know, using these systems in ways that assist people rather than replace people is coming up over and over again in the conversations around AI, and I'm hearing it in what you're saying as well.

CHANCEY FLEET
Absolutely, and let me say as a positive it is both my due diligence as an educator and my personal joy to experiment with moments where AI technologies can make it easier for me to find information or learn things. For example, if I wanna get a quick visual description of the Bluebird trains that the MTA used to run, that's a question that I might ask AI.

I never would've bothered a human being with it. It was not central enough. But if I'm reading something and I want a quick visual description to fill it in, I'll do that.

I also really love using AI tools to look up questions about different artistic or architectural styles, or even questions about code.

I'm studying Python right now. When I go to look for information online on these subjects, often I'm finding websites that are riddled with a lack of semantic structure, that have graphics that are totally unlabeled, that have carousels that are hard for screen reader users to navigate. And so one really powerful and compelling thing that current conversational AI offers is that it lives in a text box, and it won't violate the conventions of a chat by throwing a bunch of unwanted visual or structural clutter my way.

And when I just want an answer and I'm willing to grant myself that I'm going to have to live with the consequences of trusting that answer, or do some lateral reference, do some double checking, it can be worth my while. And in the best possible world moving forward, I'd like us to be able to harness that efficiency and that facility that conversational AI has for avoiding the hyper visual in a way that empowers us, but doesn't foreclose opportunities to find things out in other ways.

CINDY COHN
As you're describing it, I'm envisioning, you know, my drunk friend, right? They might do okay telling me stuff, but I wouldn't rely on them for stuff that really matters.

CHANCEY FLEET
Exactly.

CINDY COHN
You've also talked a little bit about the role of data privacy and consent, and the special concerns that blind people have around some of the technologies that are offered to them, and about making sure that consent is real. I'd love for you to talk a little bit about that.

CHANCEY FLEET
When AI is deployed on the server side to fix accessibility problems, in lieu of baking accessibility in from the ground up in a website or an application, that does a couple of things. It avoids changing the culture around accessibility at the customer company itself. It also involves an ongoing cost and technology debt to the overlay company that an organization is using, and it builds in the need for ongoing supervision of the AI. So in a lot of ways, I think that that's not optimal. What I think is optimal is for developers and designers to use AI tools to flag issues in need of human remediation, and to use AI tools for education, to speed up their immersion into accessibility and usability concepts.

You know, AI can be used to make short work of things that used to take a little bit more time. When it comes to deploying AI tools to solve accessibility problems, I think that that is a suite of tools that is best left to the discretion of the user. So we can decide, on the user side, for example, when to turn on a browser extension that tries to make those remediations. Because when they're made for us at scale, that doesn't happen with our consent and it can have a lot of collateral impacts that organizations might not expect.

JASON KELLEY
The points you're making about being involved in different parts of the process, right, it's clear that the people who use these tools, or who these tools are actually designed for, should be able to decide when to deploy them.

And it's also clear that they should be more involved, as you've mentioned a few times, in the creation. And I wanted to talk a little bit about that idea of inclusion, because it's sort of how we get to a place where consent is actually, truly given.

And it's also how we get to a place where these tools that are created do what they're supposed to do, and the companies that you're describing, um, build the web the way that it should be built, so that people can access it.

We have to have inclusion in every step of the process to get to that place where these, all of these tools and the web and, and everything we're talking about actually works for everyone. Is inclusion sort of across the spectrum a solution that you see as well?

CHANCEY FLEET
I would say that inclusion is never a solution because inclusion is a practice and a process. It's something that's never done. It's never achieved, and it's never comprehensive and perfect. 

What I see as my role as an educator, when it comes to inclusion, is meeting people where they are, trying to raise awareness among library patrons and everyone else I serve about what technologies are available and the costs and benefits of each, and helping people road-map a path from their goals and their intentions to achieving the things that they want to do.

And so I think of inclusion as sort of a guiding frame and a constant set of questions that I ask myself about what I'm noticing, what I may not be noticing, what I might be missing, who's coming in, for example, for tech lessons, versus who we're not reaching. And how the goals of the people I serve might differ from my goals for them.

And it's all kind of a spider web of things that add up to inclusion as far as I'm concerned.

CINDY COHN
I like that framing of inclusion as kind of a process rather than an end state. And I think that framing is good because I think it really moves away from the checkbox kind of approach to things like, you know, did we get the disabled person in the room? Check! 

Everybody has different goals and different things that work for them and there isn't just one box that can be checked for a lot of these kinds of things.

CHANCEY FLEET
Blind library patrons and blind people in general are as diverse as any library patrons or people in general. And that impacts our literacy levels. It impacts our thoughts and the thoughts of our loved ones about disability. It impacts our educational attainment, and especially for those of us who lose our vision later in life, it impacts how we interact with systems and services.

I would venture to say that at this time in the U.S., if you lose your vision as an adult, or if you grow up blind in a school system, the quality of literacy and travel and independent living instruction you receive is heavily dependent on the quality of the systems and infrastructure around you, who you know, and who you know who is primed to be a disability advocate or a mentor.

And I see such different outcomes when it comes to technology based on those things. And so we can't talk about a best possible world in the technology sphere without also imagining a world that prepares people with the self-confidence, the literacy skills, and the supports for developing low tech skills that are necessary to get everything that one can get out of even the most optimally designed technology. 

A step-by-step app for walking directions can be as perfect as it gets. But if the person you are equipping with that app is afraid to step out of their front door and start moving their cane back and forth and listening to the traffic and trusting their reflexes and their instincts, because they haven't been taught how to trust those things, the app won't be used, and there will be people who are unreached. And so technology can only succeed to the extent that the people using it are set up to succeed. And I think that that is where a lot of our toughest work resides.

CINDY COHN
We're trying to fix the internet here, but the internet rests on the rest of the world. And if the rest of the world isn't setting people up for success, technology can't swoop in and solve a lot of these problems.

It needs to rest upon a solid foundation. I think that's just a wonderful place to close, because all of us sit on top of what John Perry Barlow called meatspace, right? And if meatspace isn't serving us, then the digital world can't solve for the problems that are not digital.

JASON KELLEY
I would have loved to talk to Chancey for another hour. That was fantastic.

CINDY COHN
Yeah, that was a really fun conversation. And I have to say, I just love the idea of the internet going tactile, right? Right now it's all very visual, but we have the technology, some of the tools that she talked about, to make it tactile, so that maps and other things that are pretty hard for people with low vision or blindness to navigate now could become something you could feel as well as see.

JASON KELLEY
Yeah, I didn't know before talking to her that these tools even existed. And when you hear about it, you're like, oh, of course they do. But it was clear from what she said that a lot of people don't have access to them. The tools are relatively new, and they need to be spread out more. But when that happens, hopefully that does happen, it sort of then requires us to rethink how the internet is built in some ways, in terms of the hierarchy of text, what kinds of graphics exist, and protocols for converting that information into tactile experiences for people.

CINDY COHN
Yeah, I think so. And it does sit upon something that she mentioned. I mean, she said these machines exist and have existed for a long time, but they're mainly in libraries or other places where people can't use them in their everyday lives. And I think, you know, one of the things that we ended with in the conversation was really important, which is that we're all sitting upon a society that doesn't make a lot of these tools as widely available as they need to be.

And, you know, the good news in that is that the hard problem has been solved, which is how do you build a machine like this. The problem that we ought to be able to address as a society is how do we make it available much more broadly. I use this quote a lot, but you know, the future is here, it's just not evenly distributed. That seemed really, really clear in the way that she talked about these tools that most blind people have used once or twice in school, but then don't get to use and make part of their everyday lives.

JASON KELLEY
Yeah. The way I heard this was that we have this problem solved sort of at an institutional level, where you can access these tools at an institution, but not at the individual level. And it's really helpful, and optimistic, to hear that they could exist in people's homes if we can just get that to happen. And I think what was really rare for this conversation is that, like you said, we actually do have the technology to do these things. A lot of times we're talking about what we need to improve or change about the technology, and how that technology doesn't quite exist or will always be problematic. In this case, sure, the technology can always get better, but it sounds like we're actually at a point where we have a lot of the problems solved, whether it's using tactile tablets or creating ways for people to guide each other through places, whether that's through a person, with Be My Eyes, or even in some cases an AI, with the Be My AI version of that.

But we just haven't gotten to the point where those things work for everyone, and everyone has a level of technological proficiency that lets them use those things. And that's something that clearly we'll need to work on in the future.

CINDY COHN
Yeah, but she also pointed out the work that needs to be done to make sure that we're continuing to build the tech that actually serves this community. She talked about, you know, ghostwritten code and things like that, where people who don't have the lived experience are writing things and building things based upon what they think people who are blind might want. So, you know, on the one hand, there's good news, because a lot of really good technology already exists. But I think she also didn't let us off the hook as a society about something that we see all across the board, which is that we need to have the direct input of the people who are going to be using the tools in the building of the tools, lest we end up on a whole other path, with things other than what people actually need. And, you know, this is one of those old sayings, what did they say? The lessons will be repeated until they are learned. This is one of those things where, over and over again, we find that the need for people who are building technologies to not just talk to the people who are going to be using them, but really embed those people in the development, is one of the ways we stay true to our goal, which is to build stuff that will actually be useful to people.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some limited-edition merch like t-shirts or buttons or stickers, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Probably Shouldn't by J.Lang, commonGround by airtone, and Klaus by Skill_Borrower.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: Right to Repair Catches the Car

If you buy something—a refrigerator, a car, a tractor, a wheelchair, or a phone—but you can't have the information or parts to fix or modify it, is it really yours? The right to repair movement is based on the belief that you should have the right to use and fix your stuff as you see fit, a philosophy that resonates especially in economically trying times, when people can’t afford to just throw away and replace things.


(You can also find this episode on the Internet Archive and on YouTube.)

 Companies for decades have been tightening their stranglehold on the information and the parts that let owners or independent repair shops fix things, but the pendulum is starting to swing back: New York, Minnesota, California, Colorado, and Oregon are among states that have passed right to repair laws, and it’s on the legislative agenda in dozens of other states. Gay Gordon-Byrne is executive director of The Repair Association, one of the major forces pushing for more and stronger state laws, and for federal reforms as well. She joins EFF’s Cindy Cohn and Jason Kelley to discuss this pivotal moment in the fight for consumers to have the right to products that are repairable and reusable.  

In this episode you’ll learn about: 

  • Why our “planned obsolescence” throwaway culture doesn’t have to be, and shouldn’t be, a technology status quo. 
  • The harm done by “parts pairing”: software barriers used by manufacturers to keep people from installing replacement parts. 
  • Why one major manufacturer put out a user manual in France, but not in other countries including the United States. 
  • How expanded right to repair protections could bring a flood of new local small-business jobs while reducing waste. 
  • The power of uniting disparate voices—farmers, drivers, consumers, hackers, and tinkerers—into a single chorus that can’t be ignored. 

Gay Gordon-Byrne has been executive director of The Repair Association—formerly known as The Digital Right to Repair Coalition—since its founding in 2013, helping lead the fight for the right to repair in Congress and state legislatures. Their credo: If you bought it, you should own it and have the right to use it, modify it, and repair it whenever, wherever, and however you want. Earlier, she had a 40-year career as a vendor, lessor, and used equipment dealer for large commercial IT users; she is the author of “Buying, Supporting and Maintaining Software and Equipment - an IT Manager's Guide to Controlling the Product Lifecycle” (2014), and a Colgate University alumna. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

GAY GORDON-BYRNE
A friend of mine from Boston had his elderly father in a condo in Florida, not uncommon. And when the father went into assisted living, the refrigerator broke and it was out of warranty. So my friend went to Florida, figured out what was wrong, said, ‘Oh, I need a new thermostat,’ ordered the thermostat, stuck around till the thermostat arrived, put it in and it didn't work.

And so he called GE because he bought the part from GE and he says, ‘you didn't provide me, there's a password. I need a password.’ And GE says, ‘Oh, you can't have the password. You have to have a GE authorized tech come in to insert the password.’ And that to me is the ultimate in stupid.

CINDY COHN
That’s Gay Gordon-Byrne with an example of how companies often prevent people from fixing things that they own in ways that are as infuriating as they are absurd.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.  

Our guest today, Gay Gordon-Byrne, is the executive director of The Repair Association, where she has been advocating for years for legislation that will give consumers the right to buy products that are repairable and reusable – rather than things that need to be replaced outright every few years, or as soon as they break. 

CINDY COHN
The Right to Repair is something we fight for a lot at EFF, and a topic that has come up frequently on this podcast. In season three, we spoke to Adam Savage about it.

ADAM SAVAGE
I was trying to fix one of my bathroom faucets a couple of weeks ago, and I called up a Grohe service video of how to repair this faucet. And we all love YouTube for that, right? Because anything you want to fix, whether it’s your video camera or this thing, someone has taken it apart. Whether they’re in Micronesia or Australia, it doesn’t matter. But the moment someone figures out that they can make a bunch of dough from that, I’m sure we’d see companies start to say, ‘no, you can’t put up those repair videos, you can only put up these repair videos,’ and we all lose when that happens.

JASON KELLEY
In an era where both the cost of living and environmental concerns are top of mind, the right to repair is more important than ever. It addresses both sustainability and affordability concerns.

CINDY COHN
We’re especially excited to talk to Gay right now because Right to Repair is a movement that is on its way up and we have been seeing progress in recent months and years. We started off by asking her where things stand right now in the United States.

GAY GORDON-BYRNE
We've had four states actually pass statutes for Right to Repair, covering a variety of different equipment, and there's 45 states that have introduced right to repair over the past few years, so we expect there will be more bills finishing. Getting them started is easy, getting them over the finish line is hard.

CINDY COHN
Oh, yes. Oh, yes. We just passed a right to repair bill here in California where EFF is based. Can you tell us a little bit about that and do you see it as a harbinger, or just another step along the way?

GAY GORDON-BYRNE
Well, honestly, I see it as another step along the way, because three states actually had already passed laws. In California, Apple decided that they weren't going to object any further to right to repair laws, but they did have some conditions that are kind of unique to California, because Apple is so influential in California. But it is a very strong bill for consumer products. It just doesn't extend to non-consumer products.

CINDY COHN
Yeah. That's great. And do you know what made Apple change their mind? Because they had, they had been staunch opponents, right? And EFF has battled with them in various different areas around Section 1201 and other things and, and then it seemed like they changed their minds and I wondered if you had some insights about that.

GAY GORDON-BYRNE
I take full responsibility.

CINDY COHN
Yay! Hey, getting a big company to change their position like that is no small feat and it doesn't happen overnight.

GAY GORDON-BYRNE
Oh, it doesn't happen overnight. And what's interesting is that New York actually passed a bill that Apple tried to negotiate and kind of really didn't get to do it in New York, that starts in January. So there was a pressure point already in place. New York is not an insignificant size state.

And then Minnesota passed a much stronger bill. That also takes effect, I think, I might be wrong on this, I think also in January. And so the wheels were already turning, I think the idea of inevitability had occurred to Apple that they'd be on the wrong side of all their environmental claims if they didn't at least make a little bit more of a sincere effort to make things repairable.

CINDY COHN
Yeah. I mean, they have been horrible about this from the very beginning, with, you know, custom kinds of dongles and difficulty in repairing. And again, we fought them around Section 1201, which is the ability to do circumvention so that you can see how something works and build tools that will let you fix it.

It's no small feat from where we sit to get the winds to change such that even Apple puts their finger up and says, I think the winds are changing, we better get on the right side of history.

GAY GORDON-BYRNE
Yeah, that's what we've been trying to do for the past, when did we get started? I got started in 2010, the organization got started in 2013. So we've been at it a full 10 years as an actual organization, but the problems with Apple and other manufacturers existed long before. So the 1201 problem still exists, and that's the problem that we're trying to move on federally. But, oh my God, I thought moving legislation in states was hard and long.

CINDY COHN
Yeah, the federal system is different, and I think that one of the things that we've experienced, though, is when the states start leading, eventually the feds begin to follow. Now, often they follow with the idea that they're going to water down what the states do. That's why, you know, EFF and, and I think a lot of organizations rally around this thing called preemption, which doesn't really sound like a thing you want to rally around, but it ends up being the way in which you make sure that the feds aren't putting the brakes on the states in terms of doing the right things and that you create space for states to be more bold.

It's sometimes not the best thing for a company that has to sell in a bunch of different markets, but it's certainly better than letting the federal processes come in and essentially damp down what the states are doing.

GAY GORDON-BYRNE
You're totally right. One of our biggest fears is that someone will... We'll actually get a bill moving for Right to Repair, and it's obviously going to be highly lobbied, and we will probably not have the same quality of results as we have in states. So we would like to see more states pass more bills so that it's harder and harder for the federal government to preempt the states.

In the meantime, we're also making sure that the states don't preempt the federal government, which is another source of friction.

CINDY COHN
Oh my gosh.

GAY GORDON-BYRNE
Yeah, preemption is a big problem.

CINDY COHN
It goes both ways. In our Section 1201 fights, we're fighting the Green case, uh, Green v. Department of Justice, and the big issue there is that while we can get exemptions under 1201 for actual circumvention, you can't get an exemption for the tools that you need in order to circumvent. And so you have this kind of strange situation in which you technically have the right to repair your device, but nobody can help you do that, and nobody can give you the tools to do it.

So it's this weird, I often, sometimes I call it the, you know, it's legal to be in Arizona, but it's illegal to go to Arizona kind of law. No offense, Arizona.

GAY GORDON-BYRNE
That's very much the case.

JASON KELLEY
You mentioned, Gay, that you've been doing this work for a while – probably a lot longer than the time you've been with the coalition and the Repair Association. We'll get to the brighter future that we want to look towards here in a second, but before we get to the way we want to fix things and how it'll look when we do, can you just take us back a little bit and tell us more about how we got to a place where you actually have to fight for your right to repair the things that you buy? You know, 50 years ago, I think most people would just assume that appliances and – I don't know if you'd call them devices, but things that you purchased – you could fix, or you could bring to a repair shop. And now we have to force companies to let us fix things.

I know there's a lot of history there, but is there a short version of how we ended up in this place where we have to fight for this right to repair?

GAY GORDON-BYRNE
Yeah, there is a short version. It's called: about 20 years ago, right after Y2K, it became possible, because of the improvements in the internet, for manufacturers to basically host a repair manual or a user guide online and expect their customers to be able to retrieve that information for free.

Otherwise, they have to print, they have to ship. It's a cost. So it started out as a cost reduction strategy on the part of manufacturers. And at first it seemed really cool because it really solved a problem. I used to have manuals that came in like, huge desktop sets that were four feet of paper. And every month we'd get pages that we had to replace because the manual had been updated. So it was a huge savings for manufacturers, a big convenience for consumers and for businesses.

And then, no aspersions on lawyers, but my opinion is that some lawyer decided they should know who's accessing their website, for reasons we have no idea about, because they still don't make sense. So then they started requiring a login and a password, things like that.

And then another bright light, possibly a lawyer but most likely a CFO, said, we should charge people to get access to the website. And that slippery slope got really slippery really fast. So it became obvious that you could save a lot of money by not providing manuals, not providing diagnostics, and then not selling parts.

I mean, if you didn't want to sell parts, you didn't have to. There was no law that said you have to sell parts, or tools, or diagnostics. And that's where we've been for 20 years. And everybody that gets away with it has encouraged everybody else to do it. To the point where, um, I don't think Cindy would disagree with me.

I mean, I took a look, um, as did Nathan Proctor of US PIRG when we were getting ready to go before the FTC. And we said, you know, I wonder how many companies are actually selling parts and tools and manuals, and Nathan came up with a similar statistic. Roughly 90 percent of the companies don't.

JASON KELLEY
Wow.

GAY GORDON-BYRNE
So, face it, we have now gone from a situation where everybody could fix anything if they were really interested, to 90 percent of stuff not being fixable, and that number is getting worse, not better. So yeah, that's the short story. It's been a bad 20 years.

CINDY COHN
It's funny because I think it's really, it's such a testament to people's desire to want to fix their own things that despite this, you can go on YouTube if something breaks and you can find some nice person who will walk you through how to fix, you know, lots and lots of devices that you have. And to me, that's a testament to the human desire to want to fix things and the human desire to want to teach other people how to fix things, that despite all these obstacles, there is this thriving world, YouTube's not the only place, but it's kind of the central place where you can find nice people who will help tell you how to fix your things, despite it being so hard and getting harder to have that knowledge and the information you need to do it.

GAY GORDON-BYRNE
I would also add to that, there's a huge business of repair – we're not strictly fighting for people's right to do it themselves. In fact, most people, again, you know, back to some kind of general statistics, most people, somewhere around 85 percent of them, really don't want to fix their own stuff.

They may fix some stuff, but they don't want to fix all stuff. But the options for having somebody help them have also gone downhill, downhill, downhill massively in the last 20 years, and really badly in the past 10 years.

So the repair industry – employment used to be about 3 million people in the industry of repair, and that kind of spanned auto repair and a bunch of other things. But those people don't have jobs if people can't fix their stuff, because the only way they can be in business is to know that they can buy a part, to know that they can buy the tool, to know that they can get a hold of the schematic and the diagnostics. So these are the things that have thwarted business as well as do-it-yourself. And I think most people, especially the people I know, really expect to be able to fix their things. I think we've been told that we don't, and the reality is we do.

CINDY COHN
Yeah, I think that's right. And one of the, kind of, stories that people have been told is that, you know, if there's a silicon chip in it, you just can't fix it – that that just places things beyond repair. And I think that that's been a myth, and I think a lot of people have always known it's a myth, you know, certainly in EFF's community.

We have a lot of hardware hackers, we even have lots of software hackers, that know that the fact that there's a chip involved doesn't mean that it's a disposable item. But I wondered, you know, from your perspective, have you seen that as well?

GAY GORDON-BYRNE
Oh, absolutely. People are told that these things are too sophisticated, that they're too complex, they're too small. All of these things that are not true, and you know, you got 20 years of a drumbeat of just massive marketing against repair. The budgets for people that are saying you can't fix your stuff are far greater than the budgets of the people that say you can.

So, thank you, Tim Cook and Apple, because you've made this an actual point of advocacy. Every time Apple does something dastardly, and they do it pretty often, every new release there's something dastardly in it, we get to get more people behind the, ‘hey, I want to fix my phone, goddamnit!’

CINDY COHN
Yeah, I think that's right. I think that's one of the wonderful things about the Right to Repair movement: you're surfing people's natural tendencies. The idea that you have to throw something away as soon as it breaks is just so profoundly … I think it's actually an international human, you know, desire to be able to fix these kinds of things and be able to make something that you own work for you.

So it's always been profoundly strange to have companies kind of building this throwaway culture. It reminds me a little of the privacy fights, where we've had also 20 years of companies trying to convince us that your privacy doesn't matter and you don't care about it, and that the world's better if you don't have any privacy. And on one level that has certainly succeeded in building surveillance business models. But on the other hand, I think it's profoundly against human tendencies, so for those of us on the side of privacy and repair, the benefit is we're kind of riding with how people want to be and the kind of world they want to live in, against, you know, kind of very powerful, well-funded forces who are trying to convince us we're different than we are.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Gay Gordon-Byrne.

At the top of the episode, Gay told us a story about a refrigerator that couldn’t be fixed unless a licensed technician – for a fee, obviously – was brought in to ENTER A PASSWORD. INTO A FRIDGE. Even though the person who owned the fridge had sourced the new part and installed it.

GAY GORDON-BYRNE
And that illustrates to me the damage that's being done by this concept of parts pairing, which is where only the manufacturer can make the part work. So even if you can find a part, even if you could put it in, you can't make it work without calling the manufacturer again, which kind of violates the whole idea that you bought it and you own it, and they shouldn't have anything to do with it after that.

So these things are pervasive. We see it in all sorts of stuff. The refrigerator one really infuriates me.

CINDY COHN
Yeah, we've seen it with printer cartridges. We've seen it with garage door openers, for sure. I recently had an espresso machine that broke and couldn't get it fixed because the company that made it doesn't make parts available for people. And that, you know, that's a hard lesson. It's one of the things when you're buying something: to try to figure out, like, is this actually repairable or not?

You know, making that information available is something that our friends at Consumer Reports have done and other people have done, but it's still a little hard to find sometimes.

GAY GORDON-BYRNE
Yeah, that information gap is enormous. There are some resources. They're not great; none of them are comprehensive enough to really do the job. But there's an ‘indice de réparabilité’ – a repairability index – in France that covers a lot of consumer tech, you know, cell phones and laptops and things along those lines.

It's not hard to find, but it's in French, so use Google Translate or something and you'll see what they have to say. Um, that's actually had a pretty good impact on a couple companies. For example, Samsung, which had never put out a manual before, had to put out a manual in order to be rated in France. So they did – the same manual they didn't put out in the U.S. and England.

CINDY COHN  
Oh my God, it’s amazing.

Music break.

CINDY COHN
So let's flip this around a little bit. What does the world look like if we get it right? What does a repairable world look like? How is it when you live in it, Gay? Give me a day in the life of somebody who's living in the fixed version of the world.

GAY GORDON-BYRNE
Well, you will be able to buy things that you can fix, or have somebody fix them for you. And one of the consequences is that you will see more repair shops back in your town.

It will be possible for some enterprising person to open up, again, the kinds of shops we used to have when we were kids.

You'll see a TV repair shop, an appliance repair shop, an electronics repair shop. In fact, it might be one repair shop, because some of these things are all being fixed in the same way. 

So you'll see more economic activity in the area of repair. You'll also see – and this is a hope – that manufacturers, if they're going to make their products more repairable, in order to look better, you know, it's more of a PR and a marketing thing.

If they're going to compete on the basis of repairability, they're going to have to start making their products more repairable from the get-go. They're probably gonna have to stop gluing everything together. Europe has been pretty big on making sure that things are made with fasteners instead of glue.

I think we're gonna see more activity along those lines, and more use of replaceable batteries. Why should a battery be glued in? That seems like a pretty stupid thing to do. So I think we'll see some improvements along the line of sustainability, in the sense that we'll be able to keep our things longer and use them until we're done with them, not until the manufacturer decides they want to sell you a new one, which is really the cycle that we have today.

CINDY COHN
Yeah. Planned obsolescence I think is what the marketers call it. I love a vision of the world, you know, when I grew up, I grew up in a small town in Iowa and we had the, the people called the gearheads, right? They were the ones who were always tinkering with cars. And of course you could take your appliances to them and other kinds of things because, you know, people who know how to take things apart and figure out how they work tend to know that about multiple things.

So I'd love a future of the world where the kind of gearheads rise again and are around to help us keep our stuff longer and keep our stuff working. I really appreciate what you say, like, when we're done with them. I mean, I love innovation. I love new toys.

I think that's really great. But the idea that when I'm done with something, you know, it goes into a trash heap. Um, or, you know, into someplace where you have to have fancy, uh, help to make sure that you're not endangering the planet. Like, that's not a very good world.

GAY GORDON-BYRNE
Well, look at your example of your espresso machine. You weren't done with it. It quit. It quit. You can't fix it. You can't make another cup of espresso with it.

That's not what you planned. That's not what you wanted.

CINDY COHN
Yep.

JASON KELLEY
I think we all have stories like the espresso machine, and that's part of why this is such a tangible topic for everyone. Maybe I'm not alone in this, but I love, you know, thrift stores and places like that where I can get something that maybe someone else was tired of. I was walking past a house a few years ago and someone had put, uh, a laptop whose screen had been damaged just next to the trash.

And I thought, that looks like a pretty nice laptop. And I grabbed it. It was a pretty new, like, one-year-old Microsoft Surface – tablet, laptop – um, anyway, I took it to a repair shop and they were able to repair it for, like, way less than the cost of buying a new one, and I had a new laptop, essentially. Um, and I don't think they gave me extra service because I worked at EFF, but they were certainly happy to help because I worked at EFF. Um, but then, you know, these things do eventually sort of give up, right?

That laptop lasted me about three years and then had so many issues that I just kind of had to get rid of it. Where do you think, in the better future, we should put the things that are sort of unfixable? You know, do we bring them to a repair shop and they pull out the pieces that work, like a junkyard, so they can reuse them?

Is there a better system for, uh, disposing of the different pieces or the different devices that we can't repair? How do you think about that more sustainable future once everything is better in the first place in terms of being able to repair things?

GAY GORDON-BYRNE
Excellent question. We have a number of members that are what we call charitable recyclers. And I think that's a model for more, rather than less. They don't even have to be gently used. They just have to be potentially useful. And they'll take them in. They will fix them. They will train people, often people that have some employment challenges, especially coming out of the criminal justice system. And they'll train them to make repairs, and they get a skill, a marketable skill, for future employment. And they also turn around and then resell those devices to make money to keep the whole system going.

But in the commercial recycling business, there's a lot of value in the things that have been discarded if they can have their batteries removed before, before they are, quote, recycled, because recycling is a very messy business and it requires physical contact with the device to the point that it's shredded or crushed. And if we can intercept some of that material before it goes to the crusher, we can reuse more of that material. And I think a lot of it can be reused very effectively in downstream markets, but we don't have those markets because we can't fix the products that are broken.

CINDY COHN
Yep. There's a whole chain of good that starts happening if we can begin to start fixing things, right? It's not just that individuals get to fix the things that they have; it sets off a kind of happy cycle of things that get better all along the way.

GAY GORDON-BYRNE
Yep, and that can be, that can happen right now, well, I should say as soon as these laws start taking effect, because a lot of the information parts and tools that are required under the laws are immediately useful.

CINDY COHN
Right. So tell me, how do these laws work? What do they do – the good ones, anyway? What are they doing? How are things changing with the current flock of laws that are just now coming online?

GAY GORDON-BYRNE
Well, they're all pretty much the same. They require manufacturers, of things that they already repair – so there's some limitations right there – to make available on fair and reasonable terms the same parts, tools, diagnostics, and firmware that they already provide to their, quote, authorized or their subcontract repair providers, because our original intent was to restore competition. So the bills are really a pro-competition law as opposed to an e-waste law.

CINDY COHN  
Mm hmm.

GAY GORDON-BYRNE
Because these don't cover everything. They cover a lot of stuff, but not everything. California is a little bit different in that they already had a statute that required things over $50 and under $100 to be covered for three years. They have some dates in there that expand the effectiveness of the bill into products that don't even have repair options today.

But the bills that we've been promoting are a little softer, because the intent is competition, because we want to see what competition can do, when we unlock competition, what that does for consumers.

CINDY COHN  
Yeah, and I think that that dovetails nicely into something EFF has been working on quite a while now, which is interoperability, right? One of the things that unlocks competition is, you know, requiring people to build their tools and services in a way that is interoperable with others. That helps both with repair and with kind of follow-on innovation – that, you know, you can switch up how your Facebook feed shows up based on what you want to see rather than, you know, based upon what Facebook's algorithm wants you to see, or other kinds of changes like that. And how do you see interoperability fitting into all of this?

GAY GORDON-BYRNE
I think there will be more. It's not specific to the law, but I think it will simply happen as people try to comply with the law. 

Music break

CINDY COHN  
You founded the Repair Association, so tell us a little bit about how that got started and how you decided to dedicate your life to this. I think it's really important for us to think about, like, the people that are needed to build a better world, as well as the, you know, kind of technologies and ideas.

GAY GORDON-BYRNE
I was always in the computer industry. I grew up with my father who was a computer architect in the 50s and 60s. So I never knew a world that didn't involve computers. It was what dad did. And then when I needed a job out of college, and having bounced around a little bit and found not a great deal of success, my father encouraged me to take a job selling computers, because that was the one thing he had never done and thought that it was missing from his resume.

And I took to it like, uh, I don't know, fish to water? I loved it. I had a wonderful time and a wonderful career. But by the mid 2000s, I was done. I mean, I was like, I can't stand this job anymore. So I decided to retire. I didn't like being retired. I started doing other things and eventually, I started doing some work with a group of companies that repair large mainframes.

I've known them. I mean, my former boss was the president. It was kind of a natural. And they started having trouble with some of the manufacturers and I said, that's wrong. I mean, I had this sense of indignation that what Oracle had done when they bought Sun was just flatly wrong and it was illegal. And I volunteered to join a committee. And that's when, haha, that's when I got involved and it was basically, I tell people I over-volunteered.

CINDY COHN
Yeah.

GAY GORDON-BYRNE
And what happened is that because I was the only person in that organization that didn't already have relationships with manufacturers – the others couldn't bite the hand that fed them – I was elected chief snowball thrower, AKA Executive Director.

So it was a passion project that I could afford to do because otherwise I was going to stay home and knit. So this is way better than knitting or quilting these days, way more fun, way more gratifying. I've had a truly wonderful experience, met so many fabulous people, have a great sense of impact that I would never have had with quilting.

CINDY COHN
I just love the story of somebody who kind of put a toe in and then realized, Oh my God, this is so important. And ‘I found this thing where I can make the world better.’ And then you just get, you know, kind of, you get sucked in and, um, but it's, it's fun. And what I really appreciate about the Repair Association and the Right to Repair people is that while, you know, they're working with very serious things, they also, you know, there's a lot of fun in making the world a better place.

And it's kind of fun to be involved in the Right to Repair right now because after a long time kind of shouting in the darkness, there's some traction starting to happen. So then the fun gets even more fun.

GAY GORDON-BYRNE
I can tell you, it's ... we're so surprised. I mean, we've had well over 100 bills filed and, you know, every year we get a little further. We get past this committee and this hurdle and this hurdle and this hurdle. We'd get almost to the end and then something would happen. And to finally get to the end where the bill becomes law? It's like the dog that chases the car, and you go, we caught the car, now what?

CINDY COHN
Yeah. Now you get to fix it! The car!

JASON KELLEY
Yeah, now you can repair the car.

MUSIC TRANSITION

JASON KELLEY
That was such a wonderful, optimistic conversation and not the first one we've had this season. But this one is interesting because we're actually already getting where we want to be. We're already building the future that we want to live in and it's just really, really pleasing to be able to talk to someone who's in the middle of that and, and making sure that that work happens.

CINDY COHN
I mean, one of the things that really struck me is how much of the better future that we're building together is really about creating new jobs and new opportunities for people to work. I think there's a lot of fear right now in our community that the future isn't going to have work, and that without a social safety net or other kinds of things, you know, it's really going to hurt people.

And I so appreciated hearing about how, you know, Main Street's going to have more jobs. There's going to be people in your local community who can fix your things locally, because devices, those are things where having a local repair community and businesses is really helpful to people.

And so, the flip side of that is this interesting observation that one of the things that's happened as a result of shutting off the Right to Repair is an increasing centralization, um, that the jobs in this field are not happening locally, and that by unlocking the right to repair, we're going to unlock some local economic opportunities.

I mean, you know, EFF thinks about this both in terms of empowering users, but also in terms of competition. And the thing about Right to Repair is it really does unlock kind of hyper-local competition.

JASON KELLEY
I hadn't really thought about how specifically local it is to have a repair shop that you can just bring your device to. And right now it feels like the options are, if you live near an Apple store, for example, maybe you can bring your phone there and then they send it somewhere. I'd much rather go to someone, you know, in my town that I can talk to, and who can tell me about what needs to be done. That's such a benefit of this movement that a lot of people aren't even really putting at the forefront, but it really is something that will help people actually get work, and help the people who need the work and the people who need the job done.

CINDY COHN
Another thing that I really appreciate about the Right to Repair movement is how universal it is. Everyone experiences some version of this, you know, from the refrigerator story to my espresso machine to any number of other stories to the farmers – everyone has some version of how this needs to be fixed.

And the other thing that I really appreciate about Gay's stories about the Right to Repair movement is that, you know, she's somebody who comes out of computers, and was thinking about this from the context of computers, and didn't really realize that farmers were having the same problem.

Of course, we all kind of know analytically that a lot of the movement in a lot of industries is towards centralizing computers. You know, tractors are now computers with gigantic wheels. Cars are now computers with smaller wheels. Computers have become central to these kinds of things. But I think the realization that we have silos of users who are experiencing the same problem depending on what kind of tool they're using, and connecting those silos together so that together we stand as a much bigger voice, is something that the Right to Repair folks have really done well, and it is a good lesson for the rest of us.

JASON KELLEY
Yeah, I think we talked a little bit with Adam Savage when he was on a while ago about this sort of gatekeeping and how effective it is to remove the gatekeepers from these movements and say, you know, we're all fighting the same fight. And it just goes to show you that it actually works. I mean, not only does it get everybody on the same page, but unlike a lot of movements, I think you can really see the impact that the Right to Repair movement has had. 

And we talked with Gay about this and it's just, it really, I think, should make people come away optimistic that advocacy like this works over time. You know, it's not a sprint, it's a marathon, and we have actually crested a sort of hill in some ways.

There's a lot of work to be done, but it's actually work that we probably will be able to get done, and that we're seeing the benefits of today.

CINDY COHN
Yeah. And as we start to see benefits, we're going to start to see more benefits. I appreciate her point that we're in, you know, the hole-plugging period, where we got something passed and we need to plug the holes. But I also think once people start feeling the power of having the Right to Repair again, I think – I hope – it will help snowball.

One of the things that she said that I have observed as well is that sometimes it feels like nothing's happening, nothing's happening, nothing's happening, and then all of a sudden it's all happening. And I think that that's one of the, the kind of flows of advocacy work that I've observed over time and it's fun to see the, the Right to Repair Coalition kind of getting to experience that wave, even if it can be a little overwhelming sometimes.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard Come Inside by Zep Hurme featuring snowflake, and Drops of H2O (The Filtered Water Treatment) by J.Lang featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast. 

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY
And I’m Cindy Cohn.

What Does EFF Mean to You?

We could go on for days talking about all the work EFF does to ensure that technology supports freedom, justice, and innovation for all people of the world. In fact, we DO go on for days talking about it — but we’d rather hear from you. 

What does EFF mean to you? We’d love to know why you support us, how you see our mission, or what issue or area we address that affects your life the most. It’ll help us make sure we keep on being the EFF you want us to be.

So if you’re willing to go on the record, please send us a few sentences, along with your first name and current city of residence, to testimonials@eff.org; we’ll pick some every now and then to share with the world here on our blog, in our emails, and on our social media.

Podcast Episode: Antitrust/Pro-Internet

Imagine an internet in which economic power is more broadly distributed, so that more people can build and maintain small businesses online to make good livings. In this world, the behavioral advertising that has made the internet into a giant surveillance tool would be banned, so people could share more equally in the riches without surrendering their privacy.


(You can also find this episode on the Internet Archive and on YouTube.)

That’s the world Tim Wu envisions as he teaches and shapes policy on the revitalization of American antitrust law and the growing power of big tech platforms. He joins EFF’s Cindy Cohn and Jason Kelley to discuss using the law to counterbalance the market’s worst instincts, in order to create an internet focused more on improving people’s lives than on meaningless revenue generation. 

In this episode you’ll learn about: 

  • Getting a better “deal” in trading some of your data for connectedness. 
  • Building corporate structures that do a better job of balancing the public good with private profits. 
  • Creating a healthier online ecosystem with corporate “quarantines” to prevent a handful of gigantic companies from dominating the entire internet. 
  • Nurturing actual innovation of products and services online, not just newer price models. 

Timothy Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School, where he has served on the faculty since 2006. First known for coining the term “net neutrality” in 2002, he served in President Joe Biden’s White House as special assistant to the President for technology and competition policy from 2021 to 2023; he also had worked on competition policy for the National Economic Council during the last year of President Barack Obama’s administration. Earlier, he worked in antitrust enforcement at the Federal Trade Commission and served as enforcement counsel in the New York Attorney General’s Office. His books include “The Curse of Bigness: Antitrust in the New Gilded Age” (2018), "The Attention Merchants: The Epic Scramble to Get Inside Our Heads” (2016), “The Master Switch: The Rise and Fall of Information Empires” (2010), and “Who Controls the Internet? Illusions of a Borderless World” (2006).

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

TIM WU
I think with advertising we need a better deal. So advertising is always a deal. You trade your attention and you trade probably some data, in exchange you get exposed to advertising and in exchange you get some kind of free product.

You know, that's the deal with television, that's been the deal for a long time with radio. But because it's sort of an invisible bargain, it's hard to make the bargain, and the price can be increased in ways that you don't necessarily notice. For example, we had one deal with Google in, let's say, around the year 2010 - if you go on Google now, it's an entirely different bargain.

It's as if there's been a massive inflation in these so-called free products. In terms of how much data has been taken, in terms of how much you're exposed to, how much ad load you get. It's as if sneakers went from 30 dollars to 1,000 dollars!

CINDY COHN
That's Tim Wu – author, law professor, White House advisor. He's something of a Swiss Army knife for technology law and policy. He spent two years on the National Economic Council, working with the Biden administration as an advisor on competition and tech policy. He worked on antitrust legislation to try and check some of the country's biggest corporations, especially, of course, the tech giants.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast, How to Fix the Internet. Our guest today is Tim Wu. His stint with the Biden administration was the second White House administration he advised. And in between, he ran for statewide office in New York. And that whole thing is just a sideline from his day job as a law professor at Columbia University. Plus, he coined the term net neutrality!

CINDY COHN
On top of that, Tim basically writes a book every few years that I read in order to tell me what's going to happen next in technology. And before all that, he was a programmer and a more traditional lab-based scientist. So he's kind of got it all.

TIM WU
Sounds like I'm a dilettante.

CINDY COHN
Well, I think you've got a lot of skills in a lot of different departments. I've heard you call yourself a translator, and I think that's really the superpower that all of that experience gives you: the ability to kind of talk between these kinds of spaces and the rest of the world.

TIM WU
Well, I guess you could say that. I've always been inspired by Wilhelm von Humboldt, who had this theory that in order to have a full life, you had to try to do a lot of different stuff. So somehow that factors into it somewhere.

CINDY COHN
That's wonderful. We want to talk about a lot of things in this conversation, but I kind of wanted to start off with the central story of the podcast, which is, what does the world look like if we get this right? You know, you and I have spent a lot of years talking about all the problems, trying to lift up obstacles and get rid of obstacles.

But if we reach this end state where we get a lot of these problems right, in Tim Wu's world, what, what does it look like? Like, what does your day look like? What do people's experience of technology look like?

TIM WU
I think it looks like a world in which economic power surrounding the internet and surrounding the platforms is very much more distributed. And, you know, what that means practically is it means a lot of people are able to make a good living, I guess, based on being a small producer or having a service-based skill, in a way that feels sustainable and where the sort of riches of the Internet are more broadly shared.

So that's less about what kind of things you click on or, you know, what kind of apps you use and more about, I guess, the economic structure surrounding the Internet, which I think, you know, um, I don't think I'm the only person who thinks this, you know, the structure could be fairer and could work for more people.

It does feel like the potential – and, you know, we've all lived through that potential, starting in the 90s, of this kind of economically liberating force that would be the basis for a lot of people to make a decent living – has seemed to turn into something where a lot of money aggregates in a few places.

CINDY COHN
Yeah, I remember, people still talk about the long tail, right, as a way in which the digitization of materials created a revenue stream that's more than just, you know, the flavor of the week that a movie studio or a book publisher might want us to pay attention to on kind of the cultural side, right?

That there was space for this. And that also makes me think of a conversation we just had with the folks in the right to repair movement talking about like their world includes a place where there's mom and pop shops that will help you fix your devices all over the place. Like this is another way in which we have centralized economic power.

We've centralized power and if we decentralize this or, or, or spread it more broadly, uh, we're going to create a lot of jobs and opportunities for people, not just as users of technology, but as the people who help build and offer it to us.

TIM WU
I'm writing a new book, um, working title, Platform Capitalism, that has caused me to go back and look at the, you know, the early promise of the internet. And I went back and I was struck by a book, some of you may remember, called "An Army of Davids," by Glenn Reynolds the Instapundit.
Yeah, and he wrote a book and he said, you know, the future of the American economy is going to be all these kind of mom and pop sellers who take over everything – he wrote this about 2006 – and he says, you know, bloggers are already competing with news operations, small sellers on eBay are already competing with retail stores, journalists, and so on down the line: that, uh, you know, the age of the big, centralized Goliath is over and the little guys are going to rule the future.

Kind of dovetailed with that, I went back and read Yochai Benkler's early work about a production-commons model and how, you know, there'd be a new mode of production. Those books have not aged all that well. In fact, I think the book that wins is Blitzscaling. Somewhere along the line, instead of the internet favoring small business, small production, things went in the exact opposite direction.

And when I think about Yochai Benkler's idea of sort of production-based commons, you know, Waze was like that, the mapping program, until one day Waze was just bought by Google. So, I was just thinking about those as I was writing that chapter of the book.

CINDY COHN
Yeah, I think that's right. I think that identifying – and you've done a lot of work on this – the way in which we started with this promise and we ended up in this other place can help us figure out what to do about it. And Cory Doctorow, our colleague and friend, has been doing a lot of work on this with chokepoint capitalism and other work that he's done for EFF and elsewhere.

And I also agree with him that, like, we don't really want to create the good old days. We want to create the good new days, right? Like, we want to experience the benefits of an Internet post-1990s, but also have those, those riches decentralized or shared a little more broadly, or a lot more broadly, honestly.

TIM WU
Yeah, I think that's right, and so I think part of what I'm saying, you know, what would fix the internet, or what would make it something that people feel excited about. You know, I think people are always excited about apps and videos, but also people are excited about their livelihood and making money.

And if we can figure out the kind of structure that makes capitalism more distributed surrounding platforms, you know, it's not abandoning the idea of you have to have a good site or a product or something to, to gain customers. It's not a total surrender of that idea, but a return to that idea working for more people.

CINDY COHN
I mean, one of the things that you taught me in the early days is how kind of ‘twas ever so, right? If you think about radio or broadcast medium or other previous mediums, they kind of started out with this promise of a broader impact and broader empowerment and, and didn't end up that way as much as well.

And I know that's something you've thought about a lot.

TIM WU
Yeah, the first book I wrote by myself, The Master Switch, had that theme, and at the time when I wrote it – um, I wrote a lot of it in the ‘09, ‘08, ‘07 kind of period – I think at that point I had more optimism that the internet could hold out, that it wouldn't be subject to the sort of monopolizing tendencies that had taken over the radio, which originally was thousands of radio stations, or the telephone system – which started as this ‘go west young man and start your own telephone company’ kind of technology – the film industry, and many others. I was firmly of the view that things would be different. Um, I think I thought that because of the TCP/IP protocol, because of platforms like HTML that were, you know, the center of the web, because of net neutrality's lasting influence. But frankly, I was wrong. I was wrong, at least when I was writing the book.

JASON KELLEY
As you've been talking about the sort of almost inevitable funneling of the power that these technologies have into a single or, or a few small platforms or companies, I wonder what you think about newer ideas around decentralization that have sort of started over the last few years, in particular with platforms like Mastodon or something like that, these kinds of APIs or protocols, not platforms, that idea. Do you see any promise in that sort of thing? Because we see some, but I'm wondering what you think.

TIM WU
I do see some promise. I think that, in some ways, it's a long overdue effort. I mean, it's not the first – I can't say it's the first. Um, and part of me wishes that the idealistic people, even the idealistic people at some of these companies, such as they were, had been a bit more careful about their design in the first place.

You know, I guess what I would hope … the problem with Mastodon and some of these is they're trying to compete with entities that are already operating with all the full benefits of scale and which are already tied to sort of a Delaware private corporate model. Uh, now, I'm not saying that hindsight is 20/20, but when I think about the major platforms and entities of the early 21st century, it's really only Wikipedia that got it right, in my view, by structurally insulating themselves from certain forces and temptations.

So I guess what I'm trying to say is that, uh, part of me wishes we'd done more of this earlier. I do think there's hope in them. I think it's very challenging in current economics to succeed. And sometimes you have to wonder, you know, whether it might be – I don't want to say impossible – very challenging when you're competing with existing structures. And if you're starting something new, you should start it right.
That said, AI started in a way structurally different and we've seen how that's gone recently.

CINDY COHN
Oh, say more, say more!

JASON KELLEY
Yeah. Yeah. Keep, keep talking about AI.

CINDY COHN
I'm very curious about your thinking about that.

TIM WU
Well, you know, they said that the Holy Roman Empire was neither holy, nor Roman, nor an empire. And OpenAI is now no longer open, nor non-profit, nor anything else. You know, it's kind of, uh, been extraordinary that the circuit breakers they tried to install have just been blown straight through. Um, and I think there's been a lot of negative coverage of the board, um, because, you know, the business press is kind of narrow on these topics. But, um, you know, OpenAI, I guess, at some point, tried to structure itself more carefully, and, um, uh, you know, now the board is run by people whose main experience has been, um, uh, taking good organizations and making them worse, like Quora. So, yeah, that is not exactly an inspiring story, uh, of OpenAI trying to structure itself a little differently and, uh, failing to hold.

CINDY COHN
I mean, I think Mozilla has managed to have a structure that has a, you know, kind of complicated for-profit/not-for-profit strategy that has worked a little better, but I hear you. I think that if you do a power analysis, right, you know, a nonprofit is going to have a very hard time up against all the money in the world.

And I think that that seems to be what happened for OpenAI. Uh, once all the money in the world showed up, it was pretty hard to, uh, actually impossible for the public interest nonprofit side to hold sway.

TIM WU
When I think about it over and over, I think engineers and the people who set up these, uh, structures have been repeatedly very naive about, um, the power of their own good intentions. And I agree, Mozilla is a good example. Wikipedia is a good example. Google – I remember when they IPO'd, they had some setup, and they said, ‘We're not going to be an ordinary company,’ or something like that. And they sort of had preferred stock for some of the owners. You know, Google is still in some ways an impressive company, but it's hard to differentiate them from any other slightly money-grubbing, non-innovative colossus, um, of the kind they were determined not to become.

And, you know, there was this like, well, it's not going to be us, because we're different. You know, we're young and idealistic, and why would we want to become, I don't know, like Xerox or IBM, but like all of us, you begin by saying, I'm never going to become like my parents, and then next thing you know, you're yelling at your kids or whatever.

CINDY COHN
Yeah, it's the, you know, meet the new boss, same as the old boss, right? What we were hoping was that we would be free of some of the old bosses and have a different way to approach things, but the forces that stick people back in line are pretty powerful, I think.

TIM WU
And some of the old structures, you know, look a little better. Like, I'm not going to say newspapers are perfect, but a structure like the New York Times structure, for example, basically is better than Google's. And I just think there was this sense that, Well, we can solve that problem with code and good vibes. And that turned out to be the great mistake.

CINDY COHN
One of the conversations that you and I have had over the years is kind of the role of regulation on, on the internet. I think the fight about whether to regulate or not to regulate the Internet was always a little beside the point. The question is how. And I'm wondering what you're thinking now. You've been in the government a couple times. You've tried to push some things that were pretty regulatory. How are you thinking now about something like a centralized regulatory agency or another approach to, you know, regulating the Internet?

TIM WU
Yeah, I, you know, I continue to have mixed feelings about something like a central internet commission, mostly for some of the reasons you said. But on the other hand, sometimes, if I want to achieve what I mentioned – the idea of platforms that are an input into a lot of people being able to operate on top of them and run businesses, like, you know, at times, the roads have been, or the electric system, or the phone network – it's hard to get away from the idea of having some hard rules. Sometimes I think my sort of platonic form of government regulation or rules was the 1956 AT&T consent decree, which, for those who are not as deep in those weeds as I am, told AT&T that it could do nothing but telecom, and therefore not do computing, and also forced it to license every single one of its patents for free. And the impact of that was more than one thing. One is that, because they were out of computing, they were not able to dominate it, and you had companies then new to computing, like IBM and others, that got into that space and developed the American computing industry completely separate from AT&T.

And you also ended up with semiconductor companies starting around that time with the transistor patent and other patents they could use for free. So, you know, I don't know exactly how you achieve that, but I'm drawn to basically keeping the main platforms in their lane. I would like there to be more competition.
The antitrust side of me would love it. And I think that in some areas we are starting to have it, like in social media, for better or for worse. But maybe for some of the more basic fundamentals, online markets and, you know, as much competition as we can get – but some rule to stay out of other businesses, some rule to stop eating the ecosystem. I do think we need some kind of structural separation rules. Who runs those is a little bit of a harder question.

CINDY COHN
Yeah, we're not opposed to structural separation at EFF. I think we, we think a lot more about interoperability to start with as a way to, you know, help people have other choices, but we haven't been opposed to structural separation, and I think there are situations in which it might make a lot of good sense, especially, you know, in the context of mergers, right?

Where the company has actually swallowed another company that did another thing. That's kind of the low-hanging fruit, and EFF has participated a lot in commenting on potential mergers.

TIM WU
I'm not opposed to the idea of pushing interoperability. I think that, based on the experience of the last 100 years, it is a tricky thing to get right. I'm not saying it's impossible. We do have examples: the phone network in the early 20th century, where interconnection was relatively successful. And right now, you know, when you change between, let's say, T-Mobile and Verizon – there's only three left – you get to take your phone number with you, which is a form of interoperability.

But it has the risk of being something you put a lot of effort into and it not necessarily working that well in terms of actually stimulating competition, particularly because of the problem of sabotage, as we saw in the ‘96 Act. So it's actually not about the theory, it's about the practice, the legal engineering of it. Can you find the right thing where you've got kind of a cut point where you could have a good interoperability scheme?

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Tim Wu. I was intrigued by what he said about keeping platforms in their lane. I wanted to hear him speak more about how that relates to antitrust – is that spreading into other ecosystems what sets his antitrust alarm bells off? How does he think about that?

TIM WU
I guess the phrase I might use is quarantine, is you want to quarantine businesses, I guess, from others. And it's less of a traditional antitrust kind of remedy, although it, obviously, in the ‘56 consent decree, which was out of an antitrust suit against AT&T, it can be a remedy.

And the basic idea of it is, it's explicitly distributional in its ideas. It wants more players in the ecosystem, in the economy. It's almost like an ecosystem-promoting device, which is, you say, okay, you know, you are the unquestioned master of this particular area of commerce. Maybe we're talking about Amazon and its online shopping and other forms of e-commerce, or Google and search.

We're not going to give up on the hope of competition, but we think that in terms of having a more distributed economy where more people have their say – almost in the way that you might insulate the college students from the elementary school students or something – we're going to give room for other people to develop their own industries in these side markets. Now, you know, there's resistance that says, well, okay, but Google is going to do a better job in, uh, I don't know, shopping or something. You know, they might do a good job, they might not, but they've got their returns, and they're always going to have an advantage as a platform owner, and also as a monopoly owner, of having the ability to cross-subsidize and the ability to help themselves.

So I think you get healthier ecosystems with quarantines. That's basically my instinct. And, you know, we do quarantines either legally or de facto all the time. As I said, the phone network has long been barred from being involved in a lot of businesses. Banking is kept out of a lot of businesses because of obvious problems of corruption. The electric network, I guess they could make toasters if they want, but it was never set up to allow them to dominate the appliance markets.

And, you know, if they did dominate the appliance markets, I think it would be a much poorer world, a lot less interesting innovation, and frankly, a lot less wealth for everyone. So, yeah, I have strong feelings. It's more of my net neutrality side that drives this thinking than my antitrust side, I’ll put it that way.

JASON KELLEY
You specifically worked in both the Obama and Biden administrations, sort of, on these issues. I'm wondering if your thinking on this has changed in experiencing those things from the sort of White House perspective, and also just how different those two, sort of, experiences were. Obviously the moments are different in time and everything like that, but they're not so far apart – maybe light years in terms of technology. But what was your sort of experience between those two, and how do you think we're doing now on this issue?

TIM WU
I want to go back to a slightly earlier time in government, not the Obama, actually it was the Obama administration, but my first job in the, okay, sorry, my third job in the federal government, uh, I guess I'm a, one of these recidivists or something, was at the Federal Trade Commission.

CINDY COHN
Oh yeah, I remember.

TIM WU
Taking the first hard look at big tech – in fact, we were investigating Google for the first time for possible antitrust offenses, and we also did the first privacy remedy on Facebook, which I will concede was a complete and absolute failure of government, one of the weakest remedies, I think. We did that right before Cambridge Analytica, and it obviously had no effect on Facebook's conduct at all. So, one of the failed remedies. When I think back about that period, the main difference was that the tech platforms were different in a lot of ways.

I believe that, uh, monopolies and big companies have, have a life cycle. And they were relatively early in that life cycle, maybe even in a golden age. A company like Amazon seemed to be making life possible for a lot of sellers. Google was still in its early phase and didn't have a huge number of verticals. Still had limited advertising. Most searches still didn't turn up that many ads.

You know, they were in a different stage of their life. And they also still felt somewhat – they were already big companies, but they still felt relatively, in some sense, vulnerable to even more powerful economic forces. So they hadn't sort of reached that maturity. You know, 10 years later, I think the life cycle has turned. I think companies have largely abandoned innovation in their core products and turned to defense – most of their innovations are attempting to raise more revenue as opposed to making the product better. Uh, it kind of reminds me of the airline industry, which stopped innovating somewhere in the seventies and started trying to innovate in, um, terms of price structures and seats being smaller, that kind of thing.

You know, there's, you reach this end point, I think the airlines are the end point where you take a high tech industry at one point and just completely give up on anything other than trying to innovate in terms of your pricing models.

CINDY COHN
Yeah. Cory keeps coming up, but of course Cory calls it the “enshittification” of services, and I think that, in typical Cory fashion, captures this stage of the process.

TIM WU
Yeah, just to speak more broadly, I think there was a lot of faith and belief that a company like Google, in its heart, meant well, and I do still think the people working there mean well. But the structure they set up, which requires showing increasing revenue and profit every quarter, began to catch up with it, and we're at a much later stage of the process.

CINDY COHN
Yep.

TIM WU
Or the life cycle, I guess, is how I'd put it.

CINDY COHN
And then for you, coming in as a government actor on this, what did that mean? I kind of want to finish the sentence for you: it meant it was harder to get them to do the right thing, and their defenses against being made to do the right thing were better.

Like how did that impact the governmental interventions that you were trying to help make happen?

TIM WU
I think it was both. In terms of government action, there was a sense that the record was very different. The Google story in 2012 is very different from 2023, and the main difference is that in 2023 Google is paying out $26.3 billion a year to other companies to keep its search engine where it is, and arguably to split the market with Apple.

You know, there wasn't that kind of record back in 2012. Maybe we still should have acted, but there wasn't that much money being so obviously spent on pure monopoly defense. But also people were less willing. They thought the companies were great. There was a broader ideological backdrop: many people from the Clinton administration still felt the government was the problem and private industry was the solution, and had a kind of magical thinking about the ability of this industry to be different in some fundamental way.

So the chair of the FTC wasn't willing to pull the trigger. The economists all said it was a terrible idea. And they failed to block over a thousand mergers that big tech did during that period, and I think the odds are very low that none of those thousand were anti-competitive, or that in the aggregate they weren't a way of building up market power.

It did enrich a lot of small-company people, but I think people at companies like Waze really regret selling out: they ended up not really building anything of their own but becoming a tiny outpost of the Google empire.

CINDY COHN
Yeah, the “acquihire” thing is very central now, and what I hear from people in the industry is that if getting acquired by one of the big ones is not your strategy, it's very hard to get funded, right? It feeds back into the VC side and how you get funded to get something built.

If it's not something that one of the big guys is going to buy, you're going to have a hard time building it and you're going to have a hard time getting the support to get to the place where you might actually even be able to compete with them.

TIM WU
And I think sometimes people forget we had different models. Some of your listeners might forget that in the ‘70s, ‘80s, ‘90s, and early 2000s, people did build companies not just to be bought...

CINDY COHN
Right.

TIM WU
...but to build fortunes, or because they thought it was a good company. I mean, the people who built Sun, or Apple, or, you know, Microsoft, they weren't saying, well, I hope I'm gonna be bought by IBM one day. And they made real fortunes. I mean, look, being acquired, you can obviously become a very wealthy person, but you don't become a person of significance. You can go fund a charity or something, but you haven't really done something with your life.

CINDY COHN
I'm going to flip it around again. Say we get to the place where the Tim Wu vision is realized: the power is spread more broadly, we've got lots of little businesses all around, we've got many choices for consumers. What else do you see in this world? What role does the advertising business model play in this kind of better future? That's just one example of many that we could give.

TIM WU
Yeah, I like your vision of a different future. For me it goes back to the sense of opportunity: you could have a life where you run a small business on the internet that is a respectable business, and you're neither a billionaire nor impoverished. You just have your own business, the way people in New York and other parts of the country used to run stores. And in that world, in my ideal world, there is advertising, but advertising is primarily informational, if that makes sense.

It provides useful information. There's a long way to go between here and there, but in that world advertising is not the default business model for informational sources, so it has much less corrupting effect. Everyone's business model is going to affect them, obviously, but advertising is one of the more corrupting business models around.

So in my ideal world, it's not that advertising would go away, people want information, but we'd strike a better bargain. Exactly how you do that, I guess more competition helps: lower-advertising sites you might frequent, better privacy-protecting sites. But passing privacy legislation might help too.

CINDY COHN
I think that’s right. EFF has taken the position that we should ban behavioral ads. That's a pretty strong position for us, and not what we normally do, to say, well, we need to ban something. But we also need, of course, comprehensive privacy law. The lack of a baseline privacy protection underlies so many of the harms that we're seeing online right now.

I don't know if you see it the same way, but it certainly seems to be the through line for a lot of the harms that people are concerned about.

TIM WU
I mean, absolutely, and I, you know, don't want to give EFF advice on their views, but I would say that I think it's wise to see the totally unregulated collection of data from, you know, millions, if not billions of people as a source of so many of the problems that we have.

It drives unhealthy business models, and it leads to real-world consequences, identity theft among so many others. But I'd focus first on the kind of behavior it encourages and the kind of business models it encourages, which are ones that just don't, in the aggregate, feel very good for the businesses or for us in particular.

So yeah, my first priority legislatively, if I were acting at this moment, would be starting right there: a privacy law that doesn't just give users supposed rights to take a look at the data that's collected, but that meaningfully stops the collection of data. And I think we'll all just shrug our shoulders and say, oh, we're better off without that. Yes, it supported some things, but we will still have them – it's not as if we didn't have friends before Facebook.

It's not as if we didn't have video content before YouTube. These things will survive without behavioral advertising. I think your stance on this is entirely correct.

CINDY COHN
Great. Thank you, I always love it when Tim agrees with me, and it pains me when we disagree. One of the things I know is that you are one of the people who was inspired by Larry Lessig, and we cite Larry a lot on this show because we like to organize our thinking around his four levers of digital regulation: laws, norms, markets, and code, as four ways that we could control things online. And I know you've been focusing a lot on laws lately, and markets as well.

How do you think about, you know, these four levers and where we are and, and how we should be deploying them?

TIM WU
Good question. I regard Larry as a prophet. He was my mentor in law school, and in fact he is responsible for most of my life direction. Larry saw that there was a force arising through code that even then, in the 90s and early 2000s, was not particularly subject to any kind of accountability, and he saw that it could take forms that might not be consistent with the kind of liberties you would like to have or expect. He was right about that.

You know, you can say whatever you want about law or government, and there are many examples of terrible government, but at least with the United States Constitution we think: well, there is this problem called tyranny, and we need to do something about it.

There's no real equivalent for the development of abusive technologies unless you get government to do something about it, and government hasn't done much about it. The interactions are what interest me about the four forces. We can agree that code has a certain kind of sovereignty over our lives in many ways; most of us on a day-to-day basis are probably more affected by the code of the devices we use than by the laws we operate under.

And the question is, what controls code? And the two main contenders are the market and law. And right now the winner by far is just the market, which has led codemakers in directions that even they find kind of unfortunate and disgraceful.

I don't remember who had that quote, but it was some Facebook engineer that said the greatest minds of our generation are writing code to try to have people click on random ads, and we have sort of wasted a generation of talent on meaningless revenue generation when they could be building things that make people's lives better.

So the answer, and it's not easy, is to use law to counter the market. And that's where I think we are with Larry's four factors.

CINDY COHN
Yeah, I think that's right, and I agree that it's a little ro-sham-bo: you can control code with laws and markets, and you can control markets with code, which is kind of where interoperability comes in sometimes, and with laws, while norms play a slightly different role in all of these things. But I do think those interactions are really important. I've always thought "to regulate or not to regulate, that is the question" was a somewhat phony conversation, because it's not actually useful for thinking about things: we are already embedded in a set of laws, just some we pay attention to and some we might not notice. But I do think we're in a time when we have to think a lot harder about how to make laws that will be flexible enough to empower people and empower competition, and not lock in the winners of today's markets. And we spend a lot of time thinking about that issue.

TIM WU
Well, let me say this much. This might sound a little contradictory given my life story, but I'm not actually a fan of big government, certainly not overly prescriptive government. Having been in government, I see government's limits, and they are real. But I do think the people together are powerful.

I think laws can be powerful, but what they most usefully do is balance out the market, you know what I'm saying? They create different incentives, different forces against it. Trying to have government decide exactly how tech should run is usually a terrible idea. But cutting off incentives – you talked about behavioral advertising. So let's say you ban behavioral advertising, just the way we ban child labor or something. You can live without it. And yeah, maybe we're less productive because we don't let 12-year-olds work in factories. There's a marginal loss of revenue, but I frankly think it's worth it.

And some of the other practices that have shown up are in some ways the equivalent, and we can live without them. It's sort of easy to say we should ban child labor; but when you look for those kinds of practices, that's where we need law to be active.

JASON KELLEY
Well, Cindy, I came away from that with a reading list. I'm sure a lot of people are familiar with those authors and those books, but I am going to have to catch up. I think we'll put some of them, maybe all the books, in the show notes so that people who are wondering can catch up on their end.

You, as someone who's already read all those books, probably have different takeaways from this conversation than me.

CINDY COHN
You know, I really like how Tim thinks. He comes at this, especially most recently, from an economics perspective. So his future is really an economic one.

It's about an internet that has lots of spaces for people to make a reasonable living, as opposed to a few people making a killing or selling their companies to the big tech giants. And I think that vision dovetails with what a lot of the people we've talked to on this show have said: in some ways we've got to think about how to redistribute the internet, and that includes redistributing the economic benefits.

JASON KELLEY
Yeah. And thinking about, you know, something you've said many times, which is this idea of rather than going backwards to the internet we used to have, or the world we used to have, we're really trying to build a better world with the one we do have.

So another thing he did mention that I really pulled away from this conversation was when antitrust makes sense. And that sort of idea of, well, what do you do when companies start spreading into other ecosystems? That's when you really have to start thinking about the problems that they're creating for competition.

And I think the word he used was quarantine. Is that right?

CINDY COHN
Yeah I love that image.

JASON KELLEY
Yeah, that was just a helpful, I think, way for people to think about how antitrust can work. And that was something that I'll take away from this probably forever.

CINDY COHN
Yeah, I also liked his framing of what kind of deal we have with a lot of these so-called free tools. At one time, when we signed up for a Gmail account, the deal was that it was going to look at what you searched on and what you wrote, and then place ads based on that context and what you did.

And now that deal is much, much worse. He's right to liken it to something that has secretly gotten much more expensive for us: the deal for us as consumers has gotten worse and worse. And I really like that framing because it translates the issues where we live, privacy and free speech and fairness, into what is actually an economic framing of some of the same points.

I think the upshot of Tim and, honestly, some of the other people we've talked to is that this idea of ‘blitzscaling’ and growing gigantic platforms is really at the heart of a lot of the problems that we're seeing in free speech and in privacy, and also in economic fairness. And I think that's a point that Tim makes very well.

From The Attention Merchants to The Curse of Bigness, Tim has been writing in this space for a while, and what I appreciate is that Tim is really a person who came up in the Internet. He understands the Internet and a lot of its values, so he's not writing as an outsider throwing rocks as much as an insider who is kind of dismayed at how things have gone and looking to unpack all of the problems. And I think his observation, which is shared by a lot of people, is that a lot of the problems that we're seeing inside tech are also problems we're seeing outside tech. It's just that tech is new enough that they really took over pretty fast.

But I think it's important for us to recognize the problems inside tech; noting that these are broader societal problems doesn't let tech off the hook, but it may help us in thinking about how we get out of them.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll talk to you again soon.

I’m Jason Kelley

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: About Face (Recognition)

Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more?

(You can also find this episode on the Internet Archive and on YouTube.)

New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here. 

In this episode, you’ll learn about: 

  • The difficulty of anticipating how information that you freely share might be used against you as technology advances. 
  • How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use. 
  • The racial biases that were built into many face recognition technologies.  
  • How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow. 

Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KASHMIR HILL
Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats – you know, people they were worried about who'd been violent in the stadium before, or perhaps the Taylor Swift model of known stalkers, wanting to identify them if they tried to come into concerts.

But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money.

And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in.

And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used?

CINDY COHN
That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade.

She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI.

Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement.

I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right.

JASON KELLEY
It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode.

Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation.

KASHMIR HILL
Clearview AI scraped billions of photos from the internet -

JASON KELLEY
Billions with a B. Sorry to interrupt you, just to make sure people hear that.

KASHMIR HILL
Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January 2020, they had 3 billion faces in their database.

They now have 30 billion and they say that they're adding something like 75 million images every day. So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears.

So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there.
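
(To make the mechanism concrete, here is a minimal sketch of how face matching generally works, using the open-source Python library face_recognition. It illustrates the embed-and-compare technique only; it is not Clearview's code, and the filenames and the 0.6 threshold are assumptions for illustration.)

```python
# A minimal sketch of face matching in general, using the open-source
# "face_recognition" library. This is not Clearview's code; the image
# filenames and the 0.6 threshold are hypothetical choices.
import face_recognition

# Turn the query photo into a numeric "faceprint" (a 128-number embedding).
query_image = face_recognition.load_image_file("query.jpg")
query_encoding = face_recognition.face_encodings(query_image)[0]

# Compare it against other photos. A real face search engine would index
# billions of embeddings for fast nearest-neighbor lookup instead of
# looping over files like this.
for path in ["photo_a.jpg", "photo_b.jpg"]:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        distance = face_recognition.face_distance([query_encoding], encoding)[0]
        if distance < 0.6:  # a commonly used default match threshold
            print(f"Possible match in {path} (distance={distance:.2f})")
```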

JASON KELLEY
Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person? Could you talk about that a little bit?

KASHMIR HILL
Yeah. With a tool like this, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are and where you work, and all of a sudden understand the context of the conversation. Or a person walking out of an abortion clinic: if there are protesters outside, they can take a photo of that person, and now they know who they are and the health services they may have gotten.

I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are.
On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos.

It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it.

CINDY COHN
Yeah, I think Jason's ‘this is creepy’ reaction is very widely shared by a lot of people. But, you know, the name of this show is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. What would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about, in the main, rather than worried about?

KASHMIR HILL
Yeah, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future compares it to nuclear weapons: there are just too many possible downsides, it's not worth the benefits, and it should be banned altogether. I kind of don't think that's likely to happen, just because I have talked to so many police officers who really appreciate facial recognition technology and think it's a very powerful tool that, when used correctly, can be such an important part of their tool set. I just don't see them giving it up.

But when I look at what's happening right now, you have these companies, not just Clearview AI, but PimEyes and FaceCheck.ID. There are public face search engines that exist now. While Clearview is limited to police use, these are on the internet; some are even free, some require a subscription. And right now in the U.S. we don't have much of a legal infrastructure, certainly at the national level, about whether they can do that or not. But there's been a very different approach in Europe, where they say that citizens shouldn't be included in these databases without their consent. And after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases.

So that's one way to handle facial recognition technology: you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, inspired by Clearview AI, actually issued a warning to other AI companies saying, hey, just because there's all this information that's public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information.

We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. When we have gotten privacy laws at the state level, they say you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, trying to give people some rights over their face. California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database. California's population is very much bigger than that, about 39 million people, and so I'm not sure how effective those laws are at protecting people at large.

CINDY COHN
Here’s what I hear from that. The world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it’s also one where, if the police want something, it doesn’t mean that they get it. It’s a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That’s the Illinois law called BIPA, the Biometric Information Privacy Act, or the foreign regulators you mention.
It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission.

I think of technologies like this as needing good answers to two questions. Number one, who is the technology serving – who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN’T get it right?

For police use of facial recognition, the answers to both of these questions are bad. Regular people don’t benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn’t work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash.

But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash?

KASHMIR HILL
Well, I'm not a policymaker, I am a journalist. So I kind of see my job as: here's what has happened, here's how we got here, and here's how different people are dealing with it and trying to solve it. One thing that's interesting to me, since you brought up Venmo, is that Venmo was one of the very first places that Hoan Ton-That, the technical creator of Clearview AI, talked about getting faces from.

And this was interesting to me as a privacy reporter because I very much remembered the criticism that the privacy community had for Venmo: when you signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to.

And there was such a big pushback saying, hey, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand how that could come back to be used against them. And some of the initial uses were people sending each other Venmo transactions with syringes and cannabis leaves in them, and how that got used in criminal trials.

But what was interesting with Clearview is that Venmo actually had this iPhone graphic on their homepage at Venmo.com, and it would show real transactions that were happening on the network, including people's profile photos and links to their profiles. So Hoan Ton-That pointed a scraper at Venmo.com, and it would just hit the page every few seconds and pull down the photos and the links to the profiles, and he got millions of faces this way. And he says he remembered that the privacy people were kind of annoyed about Venmo making everything public; it took the company years to change it, though.
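
(As an aside, the pattern Kashmir describes is strikingly simple, which underlines her point later in the conversation that Clearview's breakthrough was ethical rather than technical. A minimal sketch of that kind of polling loop might look like the following; the URL and the parsing are hypothetical placeholders, not Venmo's real markup or Clearview's actual scraper.)

```python
# A minimal sketch of the polling pattern described above: fetch a public
# page every few seconds and collect any image URLs it exposes. The URL
# and the regex are hypothetical placeholders.
import re
import time
import requests

seen = set()
for _ in range(3):  # a real scraper would loop indefinitely
    html = requests.get("https://example.com/public-feed").text
    for url in re.findall(r'<img src="([^"]+)"', html):
        if url not in seen:
            seen.add(url)
            print("new image:", url)
    time.sleep(5)  # "hit it every few seconds"
```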

JASON KELLEY
We were very upset about this.

CINDY COHN
Yeah, we had them on a list called Fix It Already – it wasn't little, it was actually quite long – of major privacy and other problems at tech companies. The Venmo one was on there when we launched it in 2019. In 2021 they fixed it, but right in between was when all that scraping happened.

KASHMIR HILL
And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public; Facebook did that as well. But it was interesting: when I exposed Clearview AI and said here are some of the companies that they scraped from, Venmo and also Facebook, LinkedIn, and Google sent Clearview cease-and-desist letters and said, hey, you violated our terms of service in collecting this data, we want you to delete it. And people often ask, well, then what happened after that? As far as I know, Clearview did not change their practices, and these companies never did anything else beyond the cease-and-desist letters.

You know, they didn't sue Clearview. So it's clear that the companies alone are not going to be protecting our data. They've pushed us to be more public, and now that is coming full circle in a way that I don't think people were expecting when they put their photos on the internet.

CINDY COHN
I think we should start from the source, which is: why are the companies gathering all these faces in the first place? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, and even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending they're upset about scraping, which I think is all they did with the Clearview thing.

There are problems all the way down here. But from our perspective, the answer isn't to make scraping, which is often over-limited already, even more limited. The answer is to try to give people back control over these images.

KASHMIR HILL
And I get it, I know why Venmo wants photos. When I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send money to, you know, @happynappy on Venmo. So it's part of the trust. But it does seem like you could have a different architecture, so that you're not necessarily showing your face to the entire world. Maybe you could just be showing it to the people that you're doing transactions with.

JASON KELLEY
What we were pushing Venmo to do was what you mentioned: make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing: the ability to comb through these financial transactions and get information about people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults and privacy settings can impact people that we don't even know about yet, right?

KASHMIR HILL
I do think this is one of the biggest challenges for people trying to protect their privacy: it's so hard to anticipate how information that you kind of freely give at one point might be used against you or weaponized in the future as technology improves.

And so I do think that's really challenging. I don't think that most people, when they were freely putting photos of their face on the internet, were anticipating that the internet would be reorganized to be searchable by face.

So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill.

CINDY COHN
So a supporter asked a question that I'm curious about too. You dove deep into the people who built these systems, not just the Clearview people, but people before them. What did you find? Are these Dr. Evil-style geniuses who intended to build a dystopia? Or are they good folks trying to do good things who either didn't see the consequences of what they were looking at, or were surprised at the consequences of what they were building?

KASHMIR HILL
The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades.
The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well.

But I went back and asked the people who were working on this for so many years, when it was very clunky and did not work very well: were you thinking about what you were working towards, a world in which everybody is easily tracked by face, easily recognizable by face? And it was just interesting. These people working on it in the ‘70s, ‘80s, ‘90s just said it was impossible to imagine that, because the computers were so bad at it: we just never really thought that we'd ever reach this place where we are now, where computers are basically better at facial recognition than humans.

And so this was really striking to me, and I think this happens a lot: people are working on a technology and they just want to solve that puzzle, complete that technical challenge, and they're not thinking through the implications of what happens if they're successful. A philosopher of science I talked to, Heather Douglas, called this technical sweetness.

CINDY COHN
I love that term.

KASHMIR HILL
This kind of motivation where it's just like, I need to solve this. The Jurassic Park dilemma, where it's like, it'd be really cool if we brought the dinosaurs back.

So that was striking to me: of all these people who were working on this, I don't think any of them saw something like Clearview AI coming. And when I first heard about Clearview, this startup that had scraped the entire internet and made it searchable by face, I was thinking there must be some technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first?

And what I would come to figure out is that what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally, and shockingly, for companies that have released many kinds of unprecedented products, they decided facial recognition technology like this was too much, and they held it back. They decided not to release it.

And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology.

So one of the first deployments of Clearview AI, before it was called Clearview AI, was at the Deploraball, this kind of inaugural event around Trump becoming president. They were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left.

And apparently this Smartcheckr, as the product was then called, worked, and they identified two people who were trying to get in who shouldn't have. I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control.

And they said that they had fine-tuned it so it would work on people who worked with the Open Society Foundations and George Soros, because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd.

And so to me, this just seemed alarming: that you would use it to identify essentially political dissidents, democracy activists and advocates, that this was where their minds went for their product when it was very early, basically still at the prototype stage.

CINDY COHN
I think that it's important to recognize that these tools, like many technologies, are dual-use tools, and we have to think really hard about how they can be used, and create laws and policies around that, because I'm not sure you can use some kind of technological means to make sure that only good guys use this tool to do good things and bad guys don't.

JASON KELLEY
One of the things that you mentioned about government research into facial recognition reminds me that shortly after you put out your first story on Clearview, in January of 2020 I think, we put out a website called Who Has Your Face, which we'd been researching for four to six months before that. It was specifically trying to let people know which government entities had access to, let's say, your DMV photo or your passport photo for facial recognition purposes. And that's one of the great examples, I think, of how, sort of like Venmo, you put information somewhere, even information required by law in this case, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on, say, a surveillance photo.

KASHMIR HILL
So it makes me think of two things, and one is, you know, as part of the book I was looking back at the history of the US thinking about facial recognition technology and setting up guardrails or for the most part NOT setting up guardrails.

And there was this hearing about it more than a decade ago – I think Jen Lynch from EFF testified at it – back when facial recognition technology was first getting good enough to be deployed, and the FBI was starting to build a facial recognition database and police departments were starting to use these kinds of early apps.

It troubles me, knowing the bias problems that facial recognition technology had at that time, that they were actively using it. But lawmakers were concerned, and they were asking questions about whose photo is going to go in here. And the government representatives who were there, law enforcement, said at the time: we're only using criminal mugshots.

You know, we're not interested in the goings-about of normal Americans. We just want to be able to recognize the faces of people we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come, that would change. They started pulling in state driver's license photos in some places, and it ended up not just being criminals who were tracked – not always even criminals, just people who'd had encounters with law enforcement that ended with a mugshot being taken.

But that is the kind of frog-boiling of: well, we'll just start out with some of these photos, and then we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet – everybody on the planet – into this facial recognition database.

So it just speaks to this challenge of controlling it, this kind of surveillance creep, where once you start setting up the system, you just want to pull in more and more data and you want to surveil people in more and more ways.

CINDY COHN
And you tell some wonderful, or actually horrific, stories in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then, right? We need everybody's driver's licenses, not just mugshots, and that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about: I believe the Chinese government was having a hard time with its facial recognition recognizing black faces, and they made some deals in Africa to just wholesale get a bunch of black faces so they could train up on it.

And, you know, to us, talking about bias in a way that doesn't talk about comprehensive privacy reform, and instead talks only about bias, ends up in this technological world in which the solution is putting more people's faces into the system.

And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data.

KASHMIR HILL
Yeah. So bias has been a huge problem with facial recognition technology for a long time, and a big part of the problem was that they were not getting diverse training databases. A lot of the people who were working on facial recognition technology were white men, and they would make sure that it worked well on them and the other people they worked with.

And so we had technologies that just did not work as well on other people. One of those early facial recognition companies I talked to, which was in business around 2000, 2001, actually had its technology used at the Super Bowl in Tampa to secretly scan the faces of football fans, looking for pickpockets and ticket scalpers.

That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets.

And so now they are training their algorithms to work on different groups and the technology has improved a lot. It really has been addressed and these algorithms don't have those same kind of issues anymore.

Despite that, the handful of wrongful arrests that I've covered, where people are arrested for the crime of looking like someone else, have all involved people who are black. One woman so far: a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school.

And so even if you fix the bias problem in the algorithms, you're still going to have the issue of: who is this technology deployed on? Who is it used to police? So yeah, I think it'll still be a problem. And then there are the bigger civil liberties questions that still need to be addressed. Do we want police using facial recognition technology? And if so, what should the limitations be?

CINDY COHN
I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives.

KASHMIR HILL
I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him.

Same thing with Porcha Woodruff, the woman who was pregnant: taken to jail, charged, even though the woman they were looking for had committed the crime the month before and was not visibly pregnant. I mean, it was so clear they had the wrong person. And yet she had to hire a lawyer and fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated.

And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on, on individual people's lives.

CINDY COHN
Yeah, one of my hopes is that those of us who are involved in tech, trying to get privacy laws and other things passed, can have some knock-on effects toward making the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right?

Because it's all part of a system that itself needs reform. I think it's important that we recognize that as well, and not just try to extricate the technological piece from the rest of the system. That's why I think EFF has come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed.

KASHMIR HILL
In terms of laws that have been effective, we alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, a rare law that moved faster than the technology.

And it says that if you as a company want to use somebody's biometrics, like their faceprint or their fingerprint or their voiceprint, you need to get their consent, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: the Beacon Theater, Radio City Music Hall, Madison Square Garden.

The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling.

CINDY COHN
I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. It's important to point out that that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right?

Like, a lot of the stuff that we're seeing is visual and not voice. You can do voiceprints too, just like you can do faceprints, but we don't see that.

And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those protections expanded, but they exist. And recognizing that we do have legal structures that have provided us some protection, even as we work to make them better, is an important thing for people who swim in tech.

KASHMIR HILL
“Laws work” is one of the themes of the book.

CINDY COHN
Thank you so much, Kash, for joining us. It was really fun to talk about this important topic.

KASHMIR HILL
Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you.

JASON KELLEY
That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that sort of we see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF.

CINDY COHN
Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the, the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story.

JASON KELLEY
Yeah, what surprised me is that I think most of us saw facial recognition explode really quickly, but it didn't, actually. A lot of the book is about the history of its development. We could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one expected that this would blow up in the way that it did, until it kind of did.

And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one.

CINDY COHN
Yeah, I love that perspective, right?

JASON KELLEY
I mean, it’s a terrible thing, but it is helpful to think about, right?

CINDY COHN
Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the four ways that you can control behavior online: there's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that got driven over.

The thing that Clearview did, as she says, wasn't a technical breakthrough; it was an ethical breakthrough. I think it points the way towards where you might need laws.
There's also an architecture piece, though. If Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, that architectural decision could have had a pretty big impact on how fast this company was able to scale and where they could look.

So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here.

And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones.

JASON KELLEY
Exactly.

CINDY COHN
Once law enforcement decides they want something – I mean, when I asked Kash what she thinks about ideas for banning facial recognition, she said, well, I think law enforcement really likes it, so I don't think it'll be banned. And what that tells us is that this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well.

You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then we're done. I think that's really shown by this story.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode, you heard Kalte Ohren by Alex, featuring starfrosch and Jerry Spoon.

And Drops of H2O (The Filtered Water Treatment) by J.Lang, featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Why U.S. House Members Opposed the TikTok Ban Bill

What do House Democrats like Alexandria Ocasio-Cortez and Barbara Lee have in common with House Republicans like Thomas Massie and Andy Biggs? Not a lot. But they do know an unconstitutional bill when they see one.

These and others on both sides of the aisle were among the 65 House Members who voted "no" yesterday on the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521, which would effectively ban TikTok. The bill now goes to the Senate, where we hope cooler heads will prevail in demanding comprehensive data privacy legislation instead of this attack on Americans' First Amendment rights.

We're saying plenty about this misguided, unfounded bill, and we want you to speak out about it too, but we thought you should see what some of the House Members who opposed it said, in their own words.

 

I am voting NO on the TikTok ban.

Rather than target one company in a rushed and secretive process, Congress should pass comprehensive data privacy protections and do a better job of informing the public of the threats these companies may pose to national security.

— Rep. Barbara Lee (@RepBarbaraLee) March 13, 2024

   ___________________ 

Today, I voted against the so-called “TikTok Bill.”

Here’s why: pic.twitter.com/Kbyh6hEhhj

— Rep Andy Biggs (@RepAndyBiggsAZ) March 13, 2024

   ___________________

Today, I voted against H.R. 7521. My full statement: pic.twitter.com/9QCFQ2yj5Q

— Rep. Nadler (@RepJerryNadler) March 13, 2024

   ___________________ 

Today I claimed 20 minutes in opposition to the TikTok ban bill, and yielded time to several likeminded colleagues.

This bill gives the President far too much authority to determine what Americans can see and do on the internet.

This is my closing statement, before I voted No. pic.twitter.com/xMxp9bU18t

— Thomas Massie (@RepThomasMassie) March 13, 2024

   ___________________ 

Why I voted no on the bill to potentially ban tik tok: pic.twitter.com/OGkfdxY8CR

— Jim Himes 🇺🇸🇺🇦 (@jahimes) March 13, 2024

   ___________________ 

I don’t use TikTok. I find it unwise to do so. But after careful review, I’m a no on this legislation.

This bill infringes on the First Amendment and grants undue power to the administrative state. pic.twitter.com/oSpmYhCrV8

— Rep. Dan Bishop (@RepDanBishop) March 13, 2024

   ___________________ 

I’m voting NO on the TikTok forced sale bill.

This bill was incredibly rushed, from committee to vote in 4 days, with little explanation.

There are serious antitrust and privacy questions here, and any national security concerns should be laid out to the public prior to a vote.

— Alexandria Ocasio-Cortez (@AOC) March 13, 2024

   ___________________ 

We should defend the free & open debate that our First Amendment protects. We should not take that power AWAY from the people & give it to the government. The answer to authoritarianism is NOT more authoritarianism. The answer to CCP-style propaganda is NOT CCP-style oppression. pic.twitter.com/z9HWgUSMpw

— Tom McClintock (@RepMcClintock) March 13, 2024

   ___________________ 

I'm voting no on the TikTok bill. Here's why:
1) It was rushed.
2) There's major free speech issues.
3) It would hurt small businesses.
4) America should be doing way more to protect data privacy & combatting misinformation online. Singling out one app isn't the answer.

— Rep. Jim McGovern (@RepMcGovern) March 13, 2024

    ___________________

Solve the correct problem.
Privacy.
Surveillance.
Content moderation.

Who owns #TikTok?
60% investors - including Americans
20% +7,000 employees - including Americans
20% founders
CEO & HQ Singapore
Data in Texas held by Oracle

What changes with ownership? I’ll be voting NO. pic.twitter.com/MrfROe02IS

— Warren Davidson 🇺🇸 (@WarrenDavidson) March 13, 2024

   ___________________ 

I voted no on the bill to force the sale of TikTok. Unlike our adversaries, we believe in freedom of speech and don’t ban social media platforms. Instead of this rushed bill, we need comprehensive data security legislation that protects all Americans.

— Val Hoyle (@RepValHoyle) March 13, 2024

    ___________________

Please tell the Senate to reject this bill and instead give Americans the comprehensive data privacy protections we so desperately need.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Protect Yourself from Election Misinformation

Welcome to your U.S. presidential election year, when all kinds of bad actors will flood the internet with election-related disinformation and misinformation aimed at swaying or suppressing your vote in November. 

So… what’re you going to do about it? 

As EFF’s Corynne McSherry wrote in 2020, online election disinformation is a problem that has had real consequences in the U.S. and all over the world; it has been linked to ethnic violence in Myanmar and India and to Kenya’s 2017 elections, among other events. Still, election misinformation and disinformation continue to proliferate online and off.

That being said, regulation is not typically an effective or human rights-respecting way to address election misinformation. Even well-meaning efforts to control election misinformation through regulation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression. Indeed, any content regulation must be scrutinized to avoid inadvertently affecting meaningful expression: Is the approach narrowly tailored or a categorical ban? Does it empower users? Is it transparent? Is it consistent with human rights principles? 

While platforms and regulators struggle to get it right, internet users must be vigilant about checking the election information they receive for accuracy. There is help. Nonprofit journalism organization ProPublica published a handy guide about how to tell if what you’re reading is accurate or “fake news.” The International Federation of Library Associations and Institutions’ infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends.

To make sure you’re getting good information about how your election is being conducted, check in with trusted sources including your state’s Secretary of State, Common Cause, and other nonpartisan voter protection groups, or call or text 866-OUR-VOTE (866-687-8683) to speak with a trained election protection volunteer. 

And if you see something, say something: You can report election disinformation at https://reportdisinfo.org/, a project of the Common Cause Education Fund. 

EFF also offers some election-year food for thought: 

  • On EFF’s “How to Fix the Internet” podcast, Pamela Smith—president and CEO of Verified Voting—in 2022 talked with EFF’s Cindy Cohn and Jason Kelley about finding reliable information on how your elections are conducted, as part of ensuring ballot accessibility and election transparency.
  • Also on “How to Fix the Internet”, Alice Marwick—cofounder and principal researcher at the University of North Carolina, Chapel Hill’s Center for Information, Technology and Public Life—in 2023 talked about finding ways to identify and leverage people’s commonalities to stem the flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out. She discussed why seemingly ludicrous conspiracy theories get so many views and followers; how disinformation is tied to personal identity and feelings of marginalization and disenfranchisement; and when fact-checking does and doesn’t work.
  • EFF’s Cory Doctorow wrote in 2020 about how big tech monopolies distort our public discourse: “By gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.” 

An effective democracy requires an informed public, and participating in a democracy is a responsibility that requires work. Online platforms have a long way to go in providing the tools users need to discern legitimate sources from fake news. In the meantime, it’s on each of us. Don’t let anyone lie, cheat, or scare you away from making the most informed decision for your community at the ballot box. 

Podcast Episode: 'I Squared' Governance

Imagine a world in which the internet is first and foremost about empowering people, not big corporations and government. In that world, government does “after-action” analyses to make sure its tech regulations are working as intended, recruits experienced technologists as advisors, and enforces real accountability for intelligence and law enforcement programs.


(You can also find this episode on the Internet Archive and on YouTube.)

Ron Wyden has spent decades working toward that world, first as a congressman and now as Oregon’s senior U.S. Senator. Long among Congress’ most tech-savvy lawmakers, he helped write the law that shaped and protects the internet as we know it, and he has fought tirelessly against warrantless surveillance of Americans’ telecommunications data. Wyden speaks with EFF’s Cindy Cohn and Jason Kelley about his “I squared”—individuals and innovation—legislative approach to foster an internet that benefits everyone. 

In this episode you’ll learn about: 

  • How a lot of the worrisome online content that critics blame on Section 230 is actually protected by the First Amendment 
  • Requiring intelligence and law enforcement agencies to get warrants before obtaining Americans’ private telecommunications data 
  • Why “foreign” is the most important word in “Foreign Intelligence Surveillance Act” 
  • Making government officials understand national security isn’t heightened by reducing privacy 
  • Protecting women from having their personal data weaponized against them 

U.S. Sen. Ron Wyden, D-OR, has served in the Senate since 1996; he was elected to his current six-year term in 2022. He chairs the Senate Finance Committee, and serves on the Energy and Natural Resources Committee, the Budget Committee, and the Select Committee on Intelligence; he also is the lead Senate Democrat on the Joint Committee on Taxation. His relentless defiance of the national security community's abuse of secrecy forced the declassification of the CIA Inspector General's 9/11 report, shut down the controversial Total Information Awareness program, and put a spotlight on both the Bush and Obama administrations’ reliance on "secret law." In 2006 he introduced the first Senate bill on net neutrality, and in 2011 he was the lone Senator to stand against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), ultimately unsuccessful bills that purportedly were aimed at fighting online piracy but that actually would have caused significant harm to the internet. Earlier, he served from 1981 to 1996 in the House of Representatives, where he co-authored Section 230 of the Communications Decency Act of 1996—the law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on.

Resources: 

 What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

SENATOR RON WYDEN
It's been all about two things, individuals and innovation. I call it “I squared,” so to speak, because those are my principles. If you kind of follow what I'm trying to do, it's about individuals, it's about innovation. And, you know, government has a role to play in setting guardrails and ensuring that there are competitive markets. But what I really want to do is empower individuals.

CINDY COHN
That's U.S. Senator Ron Wyden of Oregon. He is a political internet pioneer. Since he was first elected to the Senate in 1996, he has fought for personal digital rights, against corporate and government censorship, and for sensible limits on government secrecy.

[THEME MUSIC BEGINS]

CINDY COHN
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley - EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives better. And sometimes when we think about the lawmakers in our country, we often think of the conflict and fighting, and of people who just don't get it when it comes to how digital works. But there are also some people in the legislatures who have worked to enact real progress.

JASON KELLEY
Our guest this week is one of the giants in the political fight for internet freedom for several decades now. Senator Wyden played a critical role in the passage of Section 230 — a pillar of online freedom of speech that has recently been coming under attack from many different sides. And he introduced the first Senate net neutrality bill back in 2006. He’s consistently pushed back against mass surveillance and pushed for a strong Fourth Amendment, and over the years, he has consistently fought for many of the things that we are fighting for here at EFF as well.

CINDY COHN
Our conversation takes a look back at some of the major milestones of his career, decisions that have directly impacted all of our online lives. And we talk about the challenges of getting Section 230 passed into law in the first place. But more recently, Senator Wyden also talks about why he was strongly opposed to laws like FOSTA-SESTA, which undermined the space that Section 230 creates for some online speakers, using the cover of trying to stop sex trafficking on the internet.

JASON KELLEY
But like us at EFF, Senator Wyden is focusing on the battles happening right now in Congress that could have a fundamental impact on our online lives. When he was elected in the ‘90s, the focus was on the explosion and rapid expansion of the internet. Now he’s thinking about the rapid expansion of artificial intelligence, and how we can make sure that we put the individual before the profits of corporations when it comes to AI.

CINDY COHN
Our conversation covers a lot of ground but we wanted to start with Senator Wyden’s own view of what a good tech future would look like for all of us.

SENATOR RON WYDEN
Well, it's one that empowers the individual. You know, consistently, the battles around here are between big interest groups. And what I want to do is see the individual have more power and big corporations and big government have less as it relates to communications.

CINDY COHN
Yeah. So what would that look like for an ordinary user? What kinds of things might be different?

SENATOR RON WYDEN
What we'd have, for example, is faster adoption of new products and services, with people showing greater trust in emerging technologies. We'd build on the motivations that have been behind my privacy bills: the Fourth Amendment Is Not For Sale Act, for example, Section 230, the Algorithmic Accountability Act. Cindy, in each one of these, it's been all about two things: individuals and innovation.

JASON KELLEY
I'm wondering if you're surprised by the way that things have turned out in any specific instance. You know, you had a lot of responsibility for some really important legislation: CDA 230, scaling back some NSA spying, helping to stop SOPA-PIPA, which are all really important to EFF and to a lot of our listeners and supporters. But I'm wondering if, despite that, you've seen surprises in where we are that you didn't expect.

SENATOR RON WYDEN
I didn't expect to have so many opponents across the political spectrum for Section 230. I knew we would have some, but nothing has been the subject of more misinformation than 230. You had Donald Trump, the President of the United States, lying about Section 230 over and over again. I don't think Donald Trump would know what Section 230 was if it hit him in the head, but he was always lying about vote by mail and all those kinds of things.
And huge corporate interests like Big Cable and legacy media have bankrolled massive lobbying and PR campaigns against 230. Since they saw user-created content and the ability of regular people to be heard as a threat to their top-down model, all those big guys have been trying to invent reasons to oppose 230 that I could not have dreamed of.
So I'm not saying, I don't think Chris Cox would say it either, that the law is perfect. But when I think about it, it's really a tool for individuals, people without power, without clout, without lobbies, without big checkbooks. And, uh, you know, a lot of people come up to me and say, "Oh, if you're not in public life, 230 will finally disappear" and all this kind of thing. And I said, I think you're underestimating the power of people to really see what this was all about, which was something very new, a very great opportunity, but still based on a fundamental principle that the individual would be responsible for what they posted in this whole new medium and in the United States individual responsibility carries a lot of weight.

CINDY COHN
Oh, I so agree, and I think that one of the things that we've seen, um, with 230 but with a lot of other things now, is a kind of a correct identification of the harm and a wrong identification of what's causing it or what will solve it. So, you know, there are plenty of problems online, but I think we feel, and it sounds like you do as well, that we're playing this funny little whack-a-mole game where whatever the problem is, somebody's sliding in to say that 230 is the reason they have that problem, when a lot of times it has to do with something, you know, not related. It could even be, in many cases, the U.S. Constitution, but also kind of misidentifying –

SENATOR RON WYDEN
Cindy, there's a great story that I sometimes tell. The New York Times one day had a big picture of Chris Cox and me, practically a full-length page. I'm 6'4", went to college on a basketball scholarship dreaming of playing in the NBA, and they said “these two people are responsible for all the hate information online, and 230 empowered people to do it.” And we hardly ever do this, but Keith Chu, our wonderful expert on all things technology, finally touched base with them and said, “You know that if there was no 230, over 95 percent of what we see online that we really dislike — you know, misogyny, hate speech, racism — would still be out there because of the First Amendment, not 230.”
And the New York Times, to its credit, printed a long, long apology essentially the next day, making the case that that was really all about the First Amendment, not 230. 230 brought added features to this, particularly the capacity to moderate, which was so important in a new opportunity to communicate.

[MUSIC FADES IN]

CINDY COHN
What drives you towards building a better internet? So many people in Congress, in your town, don't really take the time to figure out what's going on, much less propose real solutions. You know, we've been in this swing where they treated the technologies like heroes, and now we're in a time when they're treating them like villains. But what drives you to figure out what's actually going on and propose real solutions?

SENATOR RON WYDEN
I showed up, Cindy, as Oregon's first new United States senator in 34 years, in 1996, and the only person who knew how to use a computer at that point was, uh, Pat Leahy, who was a great advocate of technology and innovation. I said, "I'm going to get into new stuff." In other words, Oregon had always been about wood products. We always will be about wood products, and I will continue to champion those kinds of practices, particularly now that we're working to prevent these huge fires. But I also said we're going to get into new things. And my dad was a journalist, and he said, "You're not doing your job if you don't ask hard questions every single day."
So what we tried to do, particularly in those first days, was kind of lay the foundation, just do the foundational principles for the internet. I mean, there's a book, Jeff Kosseff wrote “The Twenty-Six Words That Created the Internet,” but we also had internet tax policy to promote non-discrimination, so you wouldn't be treated differently online than you would be offline.
Our digital signatures law, I think, has been a fabulous addition. People used to spend hours and hours in offices, you know, kind of signing these documents that looked like five phone books stacked on top of each other, and now they'd be getting through it in 15, 20 minutes. So, to me, what I think we showed is that you could produce more genuine innovation by thinking through what was to come than by just lining the pocketbooks of these big entrenched interests. Now, a big part of what we're going to have to do now with AI is go through some of those same kinds of issues. You know, I think, for example, we're all in on beating China. That's important. We're all in on innovation, but we've got to make sure that we cement bedrock, you know, privacy and accountability.
And that's really what's behind the Algorithmic Accountability Act, because, you know, when people were getting ripped off in terms of housing and education and the like with AI, we wanted to get them basic protection.

JASON KELLEY
It sounds like you're already thinking about this new thing, AI, and 20 or more years ago, you were thinking about the new thing, which was posting online. How do we get more of your colleagues to have that same impulse to be interested in tackling those hard questions that you mentioned? I think we always wonder what's missing from their views, and we just don't really know how to make them sort of wake up to the things that you get.

SENATOR RON WYDEN
What we do is particularly focus on getting experienced and knowledgeable and effective staff. I tell people I went to school on a basketball scholarship. We kind of recruit our technologists like they were all LeBron James, talking about, you know, why there were going to be opportunities here. And we have just a terrific staff now, really led by Chris Soghoian and Keith Chu.
And it's paid huge dividends, for example, when we look at some of these shady data broker issues and government surveillance. Now, with the passing of my friend Dianne Feinstein, one of the most senior members in the intelligence field, these incredibly good staff allow me to get into these issues. Right now I'm working with Senator Jerry Moran of Kansas to try to upend the declassification system, because it basically doesn't declassify anything. I'm not sure they could catch bad guys, and they certainly are hanging on to stuff that is irresponsible, uh, information collection about innocent people.

[SHORT MUSIC INTERLUDE]

CINDY COHN
These are all problems that, of course, we're very deep in, and we do appreciate that you've brought in people like our friend Chris Soghoian, who EFF's known for a long time, and other really good technologists and people who understand technology to advise you. How do we get more senators to do that too? Are there things that we could help build that would make that easier?

SENATOR RON WYDEN
I think there are, and I think we need to do more, not post-mortems, but sort of more after-action kinds of analysis. For example, the vote on SESTA-FOSTA was 98 to 2, and nobody was sure where the other no vote was, and Rand Paul came up to me and said, "You're right, so I'm voting with you."
And, uh, the point really was, you know, everybody hated the scourge of sex trafficking and the like. I consider those people monsters. But I pointed out that all you're going to do is drive them from a place where there was transparency to the dark web, where you can't get a search engine. And people went, "Huh? Well, Ron's telling us, you know, that it's going to get worse." And then I offered an amendment to basically do what I think would have really made a difference there, which is get more prosecutors and more investigators going after bad guys. And the ultimate argument for these sort of after-action, after-legislating analyses is that everybody said, "Well, you know, you've got to have SESTA-FOSTA, or you're never going to be able to do anything about Backpage," this horrible place where there were real problems with respect to sex trafficking. And what happened was, Backpage was put out of business under existing law, not under SESTA-FOSTA. And when you guys have this discussion with people who are following the program, ask them when their senator or congressperson last had a press conference about SESTA-FOSTA.
I know the answer to this. I can't find a single press conference about SESTA-FOSTA, which was ballyhooed at the time as this miraculous cure for dealing with really bad guys. And the technology didn't make sense, and the education didn't make sense, and the history with Backpage didn't make any sense, and it's because people got all intoxicated with these ideas that somehow they were going to be doing this wondrous thing, and it really made things worse.

CINDY COHN
So I'm hearing three things in the better world. One, and the one you've just mentioned, is that we actually have real accountability, that when we pass some kind of regulation, we take the time to look back and see whether it worked; that we have informed people who are helping advise or actually are the lawmakers and the regulators who understand how things, uh, really work.
And the third one is that we have a lot more accountability inside government around classification and secrecy, especially around things involving national security. And, you know, you're in this position, right, where you are read in as a member of the Intelligence Committee, so you kind of see what the rest of us don't. And I'm wondering, obviously I don't want you to reveal anything, but is that gap an important one to close?

SENATOR RON WYDEN
Yeah, I mean, you know, there have been a lot of 14-to-1 votes in the Intelligence Committee over the years, and I've been the one. And the reality is people often get swept up in these kinds of arguments, particularly from people in government. Like, we're having a big debate about surveillance now, Section 702, and everybody's saying, "Ron, what are you talking about? You're opposing this; you know, we face all these kinds of threats." And what I've always said is, read the title of the bill, the Foreign Intelligence Surveillance Act. That means we're worried about foreign intelligence. We're not, under that law, supposed to be sweeping up the records of vast numbers of Americans who are interconnected to those foreign individuals by virtue of the fact that communication systems have changed.
And I personally believe that smart policies ensure that you can fight terror ferociously while still protecting civil liberties, and not-so-smart policies give you less of both.

JASON KELLEY
How do we get to that balance that you're talking about, where, you know, I know a lot of people feel like we do have to have some level of surveillance to protect national security, but that balance of protecting the individual rights of people is a complicated one. And I'm wondering how you think about what that looks like for people.

SENATOR RON WYDEN
Well, for example, Zoe Lofgren, you know, Zoe has been a partner of mine on many projects. I know she's been sympathetic with all of you for many years in her service as a member from California. You know, what we said on our 702 reforms, and by the way, we had a whole bunch of Republicans: there needs to be a warrant requirement. If you're going after the personal data of Americans, there should be a warrant requirement.

Now, we were then asked, "Well, what happens if it's some kind of imminent kind of crisis?" And I said, what I've always said is that all my bills, as it relates to surveillance, have a warrant exception, which is if the government believes that there is an imminent threat to the security of our country and our people, the government can go up immediately and come back and settle the warrant matter afterwards. And at one point I was having a pretty vigorous debate with the President and his people, then-President Obama. And I said, "Mr. President, if the warrant requirement exception isn't written right, you all write it and I'm sure we'll work it out."
But I think that giving the government a wide berth to make an assessment about whether there is a real threat to the country, with the government prepared not only to go up immediately to get the information but also to trust the process later on and come back to show that it was warranted, is a fair balance. That's the kind of thing I'm working on right now.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Senator Ron Wyden and his work on privacy laws.

SENATOR RON WYDEN
Really, the first big law that I got passed involved privacy rights of Americans outside the country. So we had won a bunch of battles before that, you know, defeating John Poindexter, Total Information Awareness, and a variety of other battles.
But when I started this, trying to protect the privacy rights of Americans who are outside the United States, you would have thought that Western civilization was going to end. And this was the Bush administration. And the DNI, the head of national intelligence, talked to me. He said, "Ron, this is just going to be disastrous. It's going to be horrible."
And I walked him through who we were talking about. And I said, the biggest group of people we're talking about are men and women who wear the uniform of the United States, because they are outside the United States. You can't possibly be telling me, Director McConnell, it was Director McConnell at that time, that they shouldn't have privacy rights. And then things kind of moved, and I kept working with them, and they still said that this was going to be a tremendous threat and all the rest. They were going to veto it. They actually put out a statement that there would be a veto message. So I worked with them a little bit more, and we worked it out. And when we were done, the Bush administration put out something saying that we are proud to be protecting the privacy rights of Americans outside the United States.
So, if you can just take enough time and be persistent enough, you can get things done. And now, we actually have elected officials and presidents of both political parties all taking credit for the privacy rights of people outside the United States.

[MUSIC STING COMES IN TO INTRO CLIP]

SENATOR RON WYDEN ON CSPAN
A yes or no answer to the question, does the NSA collect any type of data at all on millions or hundreds of millions of Americans?

JAMES CLAPPER ON CSPAN
No sir.

SENATOR RON WYDEN ON CSPAN
It does not?

JAMES CLAPPER ON CSPAN
Not wittingly. There are cases where they could inadvertently, perhaps, collect, but not, not wittingly.

CINDY COHN
That's a clip from CSPAN, a pretty famous interaction you had with James Clapper in 2013. But I think the thing that really shines through with you is your ability to walk this fine line: you're very respectful of the system, even in an instance like this where someone is lying under oath right in your face, and you know you have to work within the system to make change. How do you navigate that in the face of lies and misdirection?

SENATOR RON WYDEN
Well, you have to take the time to really tee it up, and I really credit John Dickus of Oregon, our staffer at the time, who did a phenomenal job. He spent about six months teeing that question up for Mr. Clapper. And what happened is, his deputy — Mr. Clapper's deputy, Keith Alexander — had been telling what my 11-year-old daughter calls whoppers. My wife and I are older parents; we have this 11-year-old, and she said, "Dad, that was a big whopper. That guy told a big whopper." Keith Alexander told a bunch of whoppers, and then Mr. Clapper did. And this had all been done in public. And so we asked for answers, and he wouldn't give any answers. Then he came to the one, um, you know, open threat hearing that we have each year. And we prepare for those open threat hearings like there is no tomorrow, because you don't get very many opportunities to ask, you know, the important questions. And so John Dickus sent Mr. Clapper the question a day in advance, so that nobody could say that they hadn't gotten it. And it's an informal rule in the Intelligence Committee that if an official feels that they can't answer, they just say, "I can't answer, I have to do it in private." I wouldn't have liked that answer, but I would have respected it and tried to figure out some other way. But James Clapper got the question, looked at the camera, looked at me, and just lied, and persisted in it — he had like five or six excuses for how he wasn't lying. And I think, as the country found out what was going on, it became a big part of our push to produce the next round of laws that provided some scrutiny over the Patriot Act.

CINDY COHN
I think that's a really important insight, right? The thing that led to people being upset about the massive surveillance, once they understood it, was the lie, right? Like, if there was more transparency on the part of the national security people, and they didn't just tell themselves that they have to lie to all the rest of us, you know, in order to keep us safe, which I think is a very, very dangerous story in a democracy, we might end up in a much more reasonable place for everyone about privacy and security. And I actually don't think it's a balance. I think that you only get security if you have privacy, rather than the two having to be traded off against each other, and –

SENATOR RON WYDEN
You're a Ben Franklin person, Cindy. Anybody who gives up liberty to have security doesn't deserve either.

CINDY COHN
Well, I think that that's kind of right, but I also think that, you know, history has shown that the intense, overbroad secrecy actually doesn't make us safer. And I think this goes back to your point about accountability, where we really do need to look back and say: these things that have been embraced as allegedly making us safer, are they actually making us safer, or are we better off having a different role for secrecy? Not that there's no role, but not the one it has now, where it's an all-purpose excuse: no matter what the government does, it just uses the secrecy argument to make sure that the American people can't find out, so that we can't evaluate whether things are working or not.
My experience watching these things, and I don't know about yours, is that the overblown secrecy isn't actually making us safer.

[SHORT MUSIC INTERLUDE]

JASON KELLEY
Before we wrap up, we wanted to get a sense from you of what issues you see coming in the next three years or so that we're going to need to be thinking about to be ahead of the game. What's at the top of your mind looking forward?

SENATOR RON WYDEN
The impact of the Dobbs decision repealing Roe v. Wade is going to have huge ripple effects through our society. I believe, you know, women are already having their personal information weaponized against them. And you're seeing it in states with, you know, MAGA attorneys general, but you're also seeing it elsewhere – we did a big investigation of pharmacies, and pharmacies are giving out women's personal information hither and yon. And, you know, we're very much committed to getting privacy rights here. And I also want to congratulate EFF on your Who's Got Your Back report, because you really are touching on these same kinds of issues, and I think getting a warrant ought to be really important.
And the other one I mentioned is, uh, fighting government censorship, and I would put that both at home and abroad. It's no secret that China, Russia, and India want to control what people can say and read. But if you look at some of what we're seeing in this country, the U.S. Trade Representative taking a big step backwards in terms of access to information, we're going to have to deal with that here in our country too.

CINDY COHN
Oh, those are wonderful and scary, but wonderful and important things. I really appreciate you taking the time to talk to us. It's always such a pleasure, and we are huge fans of the work that you've done. Thank you so much for carrying, you know, the “I squared” approach, individuals and innovation. Those are two values close to our hearts here at EFF, and we really appreciate having you in Congress championing them as well.

SENATOR RON WYDEN
I don't want to make this a bouquet-tossing contest, but we've had a lot of opportunities to work together, and, you know, EFF is part of the Steppin' Up Caucus, and we really appreciate it. Let's put this in "to be continued," okay?

CINDY COHN
Terrific.

SENATOR RON WYDEN
Thanks, guys.

CINDY COHN
I really could talk with Senator Wyden all day and specifically talk with him about national security all day, but what a great conversation. And it's so refreshing to have somebody who's experienced in Congress who really is focusing on two of the most important things that EFF focuses on as well. I love the framing of I squared, right? Individuals and innovation as the kind of centerpiece of a better world.

JASON KELLEY
Yeah. And you know, he's not just saying it, it's clear from his bills and his work over the years that he really does center those things. Innovation and individuals are really the core of things like Section 230 and many other pieces of legislation that he's worked on, which, it's just really nice and refreshing to hear someone who has a really strong ethos in the Senate and has the background to show that he means it.

CINDY COHN
Yeah, and you know, sometimes we disagree with Senator Wyden, but it's always refreshing to feel like, well, we're all trying to point in the same direction. We sometimes have disagreements about how to get there.

JASON KELLEY
Yeah. And one of the great things about working with him is that, you know, he and his staff are tech-savvy, so our disagreements are often pretty nuanced, at least from what I can remember. We aren't having disagreements about what a technology is or something like that very often. I think we're usually having really good conversations with his folks, because his is one of the most tech-savvy offices in the Senate, and he's helped really make the Senate more tech-savvy overall.

CINDY COHN
Yeah, I think that this is one of these pieces of a better internet that, that feels kind of indirect, but is actually really important, which is making sure that our lawmakers - you know, they don't all have to be technologists. We have a couple technologists in Congress now, but they really have to be informed by people who understand how technology works.
And I think one of the things that's important when we show up a lot of the times is really, you know, having a clear ability to explain to the people, you know, whether it's the congressional people themselves or their staff, like how things really work and having that kind of expertise in house is, I think, something that's going to be really important if we're going to get to a better internet.

JASON KELLEY
Yeah. And it's clear that we still have work to do. You know, he brought up SESTA-FOSTA, and that's an instance where he understands, and his staff understands, that that was a bad bill, but it was still, as he said, 98-2 when it came to the vote. And ultimately that was a tech bill. And I think if we had even more tech-savvy folks, we wouldn't have had such a fight with that bill.

CINDY COHN
And I think that he also pointed to something really important, which was this idea of after-action analysis, of looking back and saying, "Well, we passed this thing, did it do what we had hoped it would do?" as a way to really have a process where we can do error correction. And I noted that Ro Khanna, Elizabeth Warren, and Senator Wyden have actually floated a bill to have an investigation into FOSTA-SESTA, which, for those who don't know the shorthand, was a way that Section 230 and its protection were cut back. And the idea was that it could help stop sex trafficking. Well, all the data that we've seen so far is that it did not do that, and in some ways it made sex trafficking in the offline environment more dangerous. But having Congress actually step in and sponsor the research to figure out whether the bill that Congress passed did the thing that they said it would is, I think, just a critical piece of how we decide what we're going to do in order to protect individuals and innovation online.

JASON KELLEY
Yeah. For me, you know, it's actually tied to something that I know a lot of tech teams do which is like a sort of post-mortem. You know, after something happens, you really do need to investigate how we got there, what worked and what didn't, but in this case we all know, at least at EFF, that this was a bad bill.

CINDY COHN
Yeah, I mean, sometimes it might be just taking what we know anecdotally and turning it into something that Congress can more easily see and digest. Um, I think the other thing is that it's just impossible to talk with or about Senator Wyden without talking about national security, because he has just been heroic in his efforts to try to make sure that we don't trade privacy off for security, and that we recognize that these two things are linked: by lifting up privacy, we're lifting up national security.
And by reducing privacy, we're not actually making ourselves safer. And he really has done more for this than anyone. And I think what was heartening about this conversation was that he talked about how he convinced national security hawks to support something that stood with privacy: this story about how most of the Americans abroad are affiliated in one way or another with the U.S. military, people who are stationed abroad and their families, and how standing up for their privacy and framing it that way ultimately led to some success. Now, we've got a long way to go, and I think he'd be the first one to agree. But the kind of doggedness and willingness to be in there for the long haul and talk to the national security folks about how these two values support each other is something that he has really proven he's willing to do, and it's so important.

JASON KELLEY
Yeah, that's exactly right, I think, as well. And it's also terrific that he's looking to the future. You know, 702 has been an issue for a long time and he's still focused on it, but what did you think of his thoughts about what our coming challenges are — things like how to deal with data in a post-Dobbs world, for example?

CINDY COHN
Oh, I think he's right on it. He's recognizing, I think as a lot of people have, that the Dobbs decision, overturning Roe v. Wade, has really made it clear to a lot of people how vulnerable we are, based upon the data that we have to leave behind in what we do every day. Now you can do things to try to protect that data, but there's only so much we can do right now without changes in the law and changes in the way things go, because, you know, your phone needs to know where you are in order to ring when somebody calls you or ping when somebody texts you.
So we need legal answers and he's correct that this is really coming into the fore right now. I think he's also thinking about the challenges that artificial intelligence are bringing. So I really appreciate that he's already thinking about how we fix the internet, you know, in the coming years, not just right now.

JASON KELLEY
I'm really glad we had this bouquet-tossing contest, I think that's what he called it. Something like that. But yeah, I think it's great to have an ally and have him be in the Senate, and I know he feels the same way about us.

CINDY COHN
Oh, absolutely. I mean, you know, part of the way we get to a better internet is to recognize the people who are doing the right thing. And so, you know, we spend a lot of time at EFF throwing rocks at the people who are doing the wrong thing. And that's really important too. But occasionally, you know, we get to throw some bouquets to the people who are fighting the good fight.

[THEME MUSIC FADES IN]

JASON KELLEY
Thanks for joining us for this episode of How To Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.
In this episode you heard Kalte Ohren by Alex, and Drops of H2O (The Filtered Water Treatment) by J.Lang.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll talk to you again soon.
I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: Open Source Beats Authoritarianism

What if we thought about democracy as a kind of open-source social technology, in which everyone can see the how and why of policy making, and everyone’s concerns and preferences are elicited in a way that respects each person’s community, dignity, and importance?


(You can also find this episode on the Internet Archive and on YouTube.)

This is what Audrey Tang has worked toward as Taiwan’s first Digital Minister, a position the free software programmer has held since 2016. She has taken the best of open source and open culture, and successfully used them to help reform her country’s government. Tang speaks with EFF’s Cindy Cohn and Jason Kelley about how Taiwan has shown that openness not only works but can outshine more authoritarian competition, in which governments often lock up data.

In this episode, you’ll learn about:

  • Using technology including artificial intelligence to help surface our areas of agreement, rather than to identify and exacerbate our differences 
  • The “radical transparency” of recording and making public every meeting in which a government official takes part, to shed light on the policy-making process 
  • How Taiwan worked with civil society to ensure that no privacy and human rights were traded away for public health and safety during the COVID-19 pandemic 
  • Why maintaining credible neutrality from partisan politics and developing strong public and civic digital infrastructure are key to advancing democracy. 

Audrey Tang has served as Taiwan's first Digital Minister since 2016, by which time she already was known for revitalizing the computer languages Perl and Haskell, as well as for building the online spreadsheet system EtherCalc in collaboration with Dan Bricklin. In the public sector, she served on the Taiwan National Development Council’s open data committee and basic education curriculum committee and led the country’s first e-Rulemaking project. In the private sector, she worked as a consultant with Apple on computational linguistics, with Oxford University Press on crowd lexicography, and with Socialtext on social interaction design. In the social sector, she actively contributes to g0v (“gov zero”), a vibrant community focusing on creating tools for the civil society, with the call to “fork the government.”

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

AUDREY TANG
In October 2016, when I first became Taiwan's digital minister, I had no examples to follow, because I was the first digital minister. And then it turns out that in traditional Mandarin, as spoken in Taiwan, digital, shu wei, means the same as “plural” - so, more than one. So I'm also a plural minister, a minister of plurality. And so to kind of explain this word play, I wrote my job description as a prayer, as a poem. It's very short, so I might as well just quickly recite it. It goes like this:
When we see an internet of things, let's make it an internet of beings.
When we see virtual reality, let's make it a shared reality.
When we see machine learning, let's make it collaborative learning.
When we see user experience, let's make it about human experience.
And whenever we hear that a singularity is near, let us always remember the plurality is here.

CINDY COHN
That's Audrey Tang, the Minister of Digital Affairs for Taiwan. She has taken the best of open source and open culture, and successfully used them to help reform government in her country of Taiwan. When many other cultures and governments have been closing down and locking up data and decision making, Audrey has shown that openness not only works, but it can win against its more authoritarian competition.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is we're trying to make our digital lives better. We spend so much time imagining worst-case scenarios, and jumping into the action when things inevitably do go wrong online but this is a space for optimism and hope.

JASON KELLEY
And our guest this week is one of the most hopeful and optimistic people we've had the pleasure of speaking with on this program. As you heard in the intro, Audrey Tang has an incredibly refreshing approach to technology and policy making.

CINDY COHN
We approach a lot of our conversations on the podcast using Lawrence Lessig’s framework of laws, norms, architecture and markets – and Audrey’s work as the Minister of Digital Affairs for Taiwan combines almost all of those pillars. A lot of the initiatives she worked on have touched on so many of the things that we hold dear here at EFF and we were just thrilled to get a chance to speak with her.
As you'll soon hear, this is a wide-ranging conversation but we wanted to start with the context of Audrey's day-to-day life as Taiwan's Minister of Digital Affairs.

AUDREY TANG
In a nutshell, I make sure that every day I checkpoint my work, so that everyone in the world knows not just the what of the policies made, but the how and why of policy making.
So for easily more than seven years, everything that I did in the process, not the result, of policymaking is visible to the general public. And that allows for requests, essentially: people can make suggestions on how to steer it in a different direction, instead of waiting until the end of the policymaking cycle, where they have to say, you know, we protest, please scratch this and start anew, and so on.
No, instead of protesting, we welcome demonstrators who demonstrate better ways to make policies, as evidenced during the pandemic, where we relied on civil society-led contact tracing and counter-pandemic methods, and for three years we never had a single day of lockdown.

JASON KELLEY
Something just popped into my head about the pandemic since you mentioned the pandemic. I'm wondering if your role shifted during that time, or if it sort of remained the same except to focus on a slightly different element of the job in some way.

AUDREY TANG
That's a great question. So entering the pandemic, I was the minister without portfolio in charge of open government, social innovation and youth engagement. And during the pandemic, I assumed a new role, which is the cabinet Chief Information Officer. And so the cabinet CIO usually focuses on, for example, making tax paying easier, or using the same SMS number for all official communications, or things like that.
But during the pandemic, I played a role like a Lagrange point, right? Between the gravity centers of privacy protection and social movements on one side, and protecting the economy, keeping TSMC running, on the other side. Whereas many countries, I would say everyone other than, say, Taiwan, New Zealand and a handful of other countries, assumed it would be a trade-off.
Like there's a dial: you have to sacrifice some of the human rights, or you have to sacrifice some lives, right? A very difficult choice. We refused to make such trade-offs.
So as the minister in charge of social innovation, I worked with the civil society leaders, who themselves are the privacy advocates, to design contact tracing systems, instead of relying on Google or Apple or other companies to design those. And as cabinet CIO, whenever there was a very good idea, we made sure that we turned it into production, making it work at a national level the next Thursday. So there's this weekly iteration that takes the best idea from the civil society and makes it work on a national level. And therefore, it is not just counter-pandemic, but also counter-infodemic. We've never had a single administrative takedown of speech during the pandemic. Yet we don't have an anti-vax political faction, for example.

JASON KELLEY
That's amazing. I'm hearing already a lot of, uh, things that we might want to look towards in the U.S.

CINDY COHN
Yeah, absolutely. I guess what I'd love to do is step back, because I think you're making manifest a lot of really wonderful ideas in Taiwan. What does the world look like if we really embrace openness, if we embrace these things? What does the bigger world look like if we go in this direction?

AUDREY TANG
Yeah, I think the main contribution that we made is that the authoritarian regimes for quite a while kept saying that they're more efficient, that for emerging threats, including pandemic, infodemic, AI, climate, whatever, top-down, takedown, lockdown, shutdowns are more effective. And when the world truly embraces democracy, we will be able to pre-bunk – not debunk, pre-bunk – this idea that democracy only leads to chaos and only authoritarianism can be effective. If we do more democracy more openly, then everybody can say, oh, we don't have to make those trade-offs anymore.
So, I think when the whole world embraces this idea of plurality, we'll have much more collaboration and much more diversity. We won't refuse diversity simply because it's difficult to coordinate.

JASON KELLEY
Since you mentioned democracy, I had heard that you have this idea of democracy as a social technology. And I find that really interesting, partly because all the way back in season one, we talked to the chief innovation officer for the state of New Jersey, Beth Noveck, who talked a lot about civic technology and how to facilitate public conversations using technology. So all of that is a lead-in to me asking this very basic question. What does it mean when you say democracy is a social technology?

AUDREY TANG
Yeah. So if you look at democracy as it's currently practiced, you'll see voting, for example. If every four years someone chooses among, say, four presidential candidates, that's just two bits of information uploaded from each individual, and the latency is very, very long, right? Four years, two years, one year.
Again, when emerging threats happen - pandemics, infodemics, climate, and so on - they don't work on a four-year schedule. They just come now, and you have to make something by next Thursday in order to counter it at its origin, right? So democracy as currently practiced suffers from a lack of bandwidth, so the preferences of citizens are not fully understood, and from latency, which means the iteration cycle is too long.
And so to think of democracy as a social technology is to think about ways to make the bandwidth wider - to make sure that people's preferences can be elicited in a way that respects each community's dignity, choices, and context, instead of compressing everything into one-dimensional poll results.
We can free up the polls so that they become wiki surveys, where everybody can write the poll questions together. It can become co-creation. People can co-create a constitutional document for the next generation of AI, which aligns itself to that document, and so on and so forth. And when we do this, like, literally every day, then the latency also shortens, and people can, like a radar, sense societal risks and come up with societal solutions in the here and now.
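
Audrey's arithmetic here is easy to make concrete. A choice among four candidates carries log2(4) = 2 bits, and spreading those two bits over a four-year cycle leaves a vanishingly small uplink from each citizen. A rough back-of-envelope sketch of her point (our illustration, not code from the episode):

```python
# Back-of-envelope: the "uplink bandwidth" of voting as currently practiced.
# Choosing one of N candidates conveys log2(N) bits, once per election cycle.
import math

candidates = 4
years_between_elections = 4

bits_per_vote = math.log2(candidates)                        # 2.0 bits
seconds_per_cycle = years_between_elections * 365.25 * 24 * 3600
bandwidth_bps = bits_per_vote / seconds_per_cycle

print(f"{bits_per_vote:.0f} bits every {years_between_elections} years")
print(f"= about {bandwidth_bps:.1e} bits per second of citizen preference")
```

Running this gives on the order of 10^-8 bits per second - the "lack of bandwidth" Audrey is pointing at.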

CINDY COHN
That's amazing. And I know that you've helped develop, or at least helped implement, some of the actual tools that do this. And I'm interested - you know, we've got a lot of technical people in our audience - in how you build this and what values you put into them. I'm thinking about things like Polis, but I suspect there are others too.

AUDREY TANG
Yes, indeed. Polis is quite well known in that it's a kind of social media that, instead of polarizing people to drive so-called engagement or addiction or attention, automatically surfaces bridge-making narratives and statements. So only the ideas that speak to both sides, or to multiple sides, will gain prominence in Polis.
And then the algorithm surfaces them to the top so that people understand: oh, despite our seeming differences, which were magnified by mainstream and other antisocial media, there is common ground. Like 10 years ago, when UberX first came to Taiwan, the Uber drivers, the taxi drivers, and the passengers all actually agreed on insurance, on registration, and on not undercutting existing meters. These were the important things.
So instead of arguing about abstract ideas, like whether it's a sharing economy or an extractive gig economy, we focus, again, on the here and now, and settle the ideas in a way that's called rough consensus - meaning that everybody, maybe not perfectly, can live with it.
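
Polis's actual "group-informed consensus" math is more involved, but the core move Audrey describes - ranking statements by how well they bridge opinion clusters rather than by raw engagement - fits in a few lines. A minimal sketch, with a made-up vote matrix and a fixed cluster count for illustration:

```python
# Minimal sketch of a Polis-style bridging ranking (simplified, illustrative).
# Rows are participants, columns are statements: +1 agree, -1 disagree, 0 pass.
import numpy as np
from sklearn.cluster import KMeans

votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  0],
    [-1,  1,  1, -1],
    [-1,  1,  1,  0],
])

# Cluster participants into opinion groups by their whole vote vectors.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# Score each statement by its *minimum* agreement rate across groups:
# a statement ranks highly only if every opinion group supports it.
def bridging_scores(votes, groups):
    return [
        min(np.mean(votes[groups == g, s] == 1) for g in np.unique(groups))
        for s in range(votes.shape[1])
    ]

order = np.argsort(bridging_scores(votes, groups))[::-1]
print("statements, most bridging first:", order)
```

Here the one statement everybody agrees with tops the ranking, while statements only one camp likes sink - the opposite of an engagement-maximizing feed.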

CINDY COHN
I just think they're wonderful, and I love the flipping of this idea of algorithmic decision-making, such that the algorithm is surfacing places of agreement - and I think it also does some mapping of places of agreement - instead of surfacing the disagreement, right?
And algorithms really can be programmed in either direction. Thinking about how you build something that brings people together is fascinating to me, and doubly interesting because you've actually used it in the Uber example, and I think you've used some version of that also back in the early work with the Sunflower movement as well.

AUDREY TANG
Yeah, the Uber case was 2015, and the Sunflower Movement was 2014. And in 2014, the Ma Ying-jeou administration had an approval rating among citizens of less than 10%, which means that anything the administration said, the citizens ultimately didn't believe, right? And so instead of relying on traditional partisan politics, which totally broke down circa 2014, Ma Ying-jeou worked with people who came from the tech communities and named Simon Chang from Google first as vice premier and then as premier. And then in 2016, when the Tsai Ing-wen administration began, the premier, Lin Chuan, was also independent. So after 2014-15, we were at a new phase of our democracy, where it became normal for me to say: oh, I don't belong to any party, but I work with all the parties. That credible neutrality, this kind of bridge-making across parties, became something people expect the administration to do. And we don't see that much of this kind of bridge-making action in other advanced democracies.

CINDY COHN
You know, I had this question, and I know that one of our supporters did as well, which is: what's your view on hackers? And by hackers here, I mean people with deep technical understanding. Do you think that they can have more impact by going into government than by staying in private industry? How do you think about that? Because obviously you made some decisions around that as well.

AUDREY TANG
So my job description basically implies that I'm not working for the government; I'm just working with the government. And not for the people, but with the people. And this is very much in line with the internet governance technical community, right? The technical community within the internet governance communities places itself as a hub between the public sector, the private sector, and even civil society.
So, the dot net suffix is something else. It is something that includes dot org, dot com, dot edu, dot gov, and even dot military together into a shared fabric, so that people can find rough consensus and running code regardless of which sector they come from. And I think this is the main gift that the hacker community gives to modern democracy: we can work on the process, and the process or the mechanism naturally fosters collaboration.

CINDY COHN
Obviously, whenever you can toss rough consensus and running code into a conversation, you've got our attention at EFF, because I think you're right. And I think the thing that we've struggled with is how to do this at scale.
What's so exciting about your work is that you really are doing a version of transparency, rough consensus, running code, and finding commonalities at a scale that, I would say, many people weren't sure was possible. That's what's so exciting about what you've been able to build.

JASON KELLEY
I know that before you joined the government, you were a civic hacker involved in something called Gov Zero. And I'm wondering, maybe you can talk a little bit about that, and also help people who are listening to this podcast think about ways they can follow your path. Not everyone can join the government to do these sorts of things, but I think people would love to implement some of these ideas and know more about how they could get into a position to do so.

AUDREY TANG
Collaborative diversity works not just in the dot gov world - if you're working in a large enough dot org or dot com, it all works the same, right? When I first discovered the World Wide Web, I learned about image tags, and the first image tag that I put up was the Blue Ribbon campaign. It was actually about unifying the concerns of not just librarians but also the hosting companies and really everybody, regardless of their suffix. We saw their webpages turning black, with this prominent blue ribbon at the center. So by making the movement fashionable across sectors, you don't have to work in the government in order to make a change. Just open source your code, and somebody in the administration who's also a civic hacker will notice and adapt, fork, or merge your code back.
And that's exactly how Gov Zero works. In 2012, a bunch of civic hackers decided that they'd had enough of PDF files that are just image scans of budget descriptions, or things like that, which made it almost impossible for average citizens to understand what was going on with the Ma Ying-jeou administration. And so they set up forked websites.
So for each government website, something dot gov dot tw, the civic hackers registered something dot g0v dot tw, which looks almost the same. You visit a regular government website, you change the O to a zero, and this domain hack ensures that you're looking at a shadow-government version of the same website - except it's on GitHub, except it's powered by open data, except there are real interactions going on, and you can actually have a conversation about any budget item, around a visualization, with your fellow civic hackers.
And many of those projects in Gov Zero became so popular that the ministries finally merged their code back, so that if you go to the official government website, it looks exactly the same as the civic hacker version.
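
The domain hack itself is almost mechanical: swap the letter O in "gov" for a zero. A hypothetical helper showing the mapping (simplified string handling, not g0v's actual tooling, with a made-up hostname):

```python
# Map an official *.gov.tw hostname to its civic-hacker shadow at *.g0v.tw.
def to_shadow(hostname: str) -> str:
    head, sep, tail = hostname.partition(".gov.tw")
    if not sep:
        raise ValueError("not a .gov.tw hostname")
    return head + ".g0v.tw" + tail

print(to_shadow("budget.gov.tw"))  # -> budget.g0v.tw
```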

CINDY COHN
Wow. That is just fabulous. And for those who might be a little younger, the Blue Ribbon Campaign was an early EFF campaign in which websites across the internet put up a blue ribbon to demonstrate their commitment to free speech. So I adore that that was one of the inspirations for the kind of work you're doing now. And I love hearing these recent examples as well - this is something you really can do over and over again.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

TIME magazine recently featured Audrey Tang as one of the 100 most influential people in AI, and one of the projects it mentioned is Alignment Assemblies, a collaboration with the Collective Intelligence Project, a policy organization that employs a chatbot to help citizens weigh in on their concerns around AI and the role it should play.

AUDREY TANG
So it started as just a Polis survey of leaders at the Summit for Democracy, AI labs, and so on, about exactly how their concerns are bridge-worthy when it comes to the three main values identified by the Collective Intelligence Project, which are participation, progress, and safety. Because at the time, with GPT-4 and its effect on everybody's mind, we heard a lot of strong trade-off arguments: to maximize safety, we have to, I don't know, restrict GPU purchasing across the world to put a cap on progress. Or we heard that to make open source possible, we must give up the idea of AIs aligning themselves, and instead have uncensored models act as personal assistants, so that everybody has one, and people become inoculated against deepfakes because everybody can very easily deepfake, and so on.
And we also heard that maybe internet communication will be taken over by deepfakes, so we will have to reintroduce some sort of real-name internet, because otherwise everybody will be a bot on the internet, and so on. So all these ideas really pushed the Overton window, right? Because before generative AI, these ideas were considered fringe.
And suddenly, at the end of March this year, those ideas gained prominent ground. So using Polis and TalkToTheCity and other tools, we quickly mapped an actual overlapping consensus. Regardless of which value you come from, people generally understand that if we don't tackle the short-term risks - the interactive deepfakes, the persuasion and addiction risks, and so on - then we won't even coordinate enough, or live together long enough, to see the coordination around the extinction risks a decade or so down the line, right?
So we have to focus on the immediate risks first, and that led to the safe dot ai joint statement, which I signed, and also the Mozilla openness and safety joint statement, which I signed, and so on.
So the bridge-making AI actually enabled a sort of deep canvassing, where I can take all the sides and then make narratives that bridge the three very different concerns. So it's not a trilemma; rather, they reinforce each other mutually. And so in Taiwan, a surprising consensus that we got from the Polis conversations and the two face-to-face, day-long workshops was that people in Taiwan want the Taiwanese government to pioneer this use of trustworthy AI.
So instead of the private sector producing the first experiences, they want the public servants - exercising their caution, of course - to use gen AI in the public service. But with one caveat: this must be public code. That is to say, it should be free software, open source; the way it integrates into decision-making should be an assistive role; and everything needs to be meticulously documented, so that civil society can replicate it on their own personal computers, and so on. And I think that's quite insightful. And therefore we're actually doubling down on societal evaluation and certification, and we're setting up a center for that at the end of this year.

CINDY COHN
So what are some of the lessons you've learned in doing this in Taiwan that you think countries around the world, or people around the world, ought to take back and think about how they might implement?
Are there pitfalls that you might want to avoid? Are there things that you think really worked well that people ought to double down on?

AUDREY TANG
I think it boils down to two main observations. The first is that credible neutrality, and alignment with the career public service, is very, very important. Political parties come and go, but the career public service is very aligned with the civic hackers' way of thinking, because they maintain the mechanism.
They want the infrastructure to work, and they want to serve people who belong to different political parties. It doesn't matter which, because that's what a public service does: it serves the public. And so for the first few years of the Gov Zero movement, the projects found natural allies not just in the career public service but also in the credibly neutral institutions in our society.
For example, our National Academy, which doesn't report to the ministers but rather directly to the president, is widely seen as credibly neutral. And civil society organizations can play such a role just as effectively if they work directly with the people, not just for the policy think tanks and so on.
One good example may be Consumer Reports in the U.S., or National Public Radio, and so on. Basically, these are mediators that are very similar to us, the civic hackers, and we need to find allies in them. So this is the first observation. And the second observation is that you can turn any crisis that urgently needs clarity into an opportunity to build future mechanisms that work better.
For that, you need civil society to trust it, and the best way to win trust is to give trust. So you simply say to the opposition party - to everyone - here is the real-time API to the open data, and if you make a critique of our policy, well, you have the same data as we do. Patches welcome; send us pull requests, and so on. This takes what used to be a zero-sum or negative-sum dynamic in politics around an emergency like a pandemic or infodemic and turns it into a co-creation opportunity, and the resulting infrastructure becomes so legitimate that no political party will dismantle it. It becomes another part of the political institution.
So having this idea of digital public infrastructure, and asking the parliament to give it infrastructure money and investment, just like building parks and roads and highways - this is also super important.
So when you have a competent society - when we focus not just on the literacy but on the competence of everyday citizens - they can contribute to public infrastructure through civic infrastructure. So credible neutrality on one hand, and public and civic infrastructure on the other: I think these two are the most fundamental, but also the easiest-to-practice, ways to introduce this plurality idea to other polities.

CINDY COHN
Oh, I think these are great ideas. And it reminds me a little of what we learned when we started doing electronic voting work at EFF. We learned that we needed to really partner with the people who run elections.
We were aligned in that all of us really wanted to make sure that the person with the most votes was actually the person who won the election. But we started out a little adversarial, and we really had to learn to flip that around. Now that's something that our friends at Verified Voting have really figured out, and they have built some strong partnerships. But I suspect in your case it could have been a little annoying to officials that you were creating these shadow websites. Did it take a bit of a conversation to flip them around to the point where they embraced it?

AUDREY TANG
I think the main intervention that I personally made, back in the days when I ran the MoeDict - the Ministry of Education Dictionary project - in the Gov Zero movement, was that we very prominently said that although we reuse all the so-called copyright-reserved data from the Ministry of Education, we relinquish all our copyright under the then very new Creative Commons Zero, so that they cannot say we're stealing any of the work, because obviously we're giving everything back to the public.
So by serving the public in an even more prominent way than the public service, we made ourselves not just natural allies but kind of reverse mentors of the young people who work with cabinet ministers. Because we served the public better in some ways, they could just take the entire website design - the Unicode support, the interoperability, the standards conformance, the accessibility, and so on - and simply tell their vendors: you know, you can merge it, you don't have to pay these folks a dime. And naturally the service improves, and they get praise from the press and so on. And that fuels this virtuous cycle of collaboration.

JASON KELLEY
One thing that you mentioned at the beginning of our conversation that I would love to hear more about is the idea of radical transparency. Can you talk about how that shows up in your workflow in practice every day? Like, do you wake up and have a cabinet meeting and record it and transcribe it and upload it? How do you find time to do all that? What is the actual process?

AUDREY TANG
Oh, I have staff, of course. And also, nowadays, language models. The proofreading language models are very helpful. And I actually train my own language models, because the pre-training of all the leading large language models already read from the seven years or so of public transcripts that I've published.
So they actually know a lot about me. In fact, when facilitating chatbot conversations, one of the more powerful prompts we discovered was simply: facilitate this conversation in the manner of Audrey Tang. And then the language model actually knows what to do, because it has seen so many facilitative transcripts.

CINDY COHN
Nice! I may start doing that!

AUDREY TANG
It's a very useful elicitation prompt. And so I train my local language model. My emails, especially the English ones, are all drafted by the local model. And it has no privacy concern, because it runs in airplane mode. The entire fine-tuning and inference - everything is done locally. And so while it does learn from my emails and so on, I always read fully before hitting send.
But this language-model integration into personal computing has already saved, I would say, 90 percent of my time on daily chores like proofreading, checking transcripts, replying to emails, and things like that. And so one of the main arguments we make in the cabinet is that this kind of use of what we call local AI, edge AI, or community open AI is actually better for discovering vulnerabilities and flaws, because the public service has a duty to ensure accuracy - and what better way to ensure the accuracy of language-model systems than to integrate them into the flow of work in a way that doesn't compromise privacy and personal data protection? And so, yeah, AI is a great time saver, and we're also aligning AI as we go.
So for the other ministries that want to learn from this radical transparency mechanism and so on, we almost always sell it as a more secure and time-saving device. And then once they adopt it, they see the usefulness of getting more public input and having a language model digest the collective inputs and respond to the people in the here and now.
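
For listeners who want to try the airplane-mode workflow Audrey describes, here is a minimal sketch using the open-source llama-cpp-python bindings. The model file and prompts are placeholders; the point is only that drafting stays entirely on-device, with a human reading every draft before it goes out:

```python
# Fully local "assistive" email drafting: nothing leaves the machine.
# Requires: pip install llama-cpp-python, plus any locally stored GGUF model.
from llama_cpp import Llama

llm = Llama(model_path="models/local-assistant.gguf",  # hypothetical path
            n_ctx=4096, verbose=False)

incoming = "Could you share the agenda for Thursday's collaboration meeting?"

reply = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "Draft a courteous reply. A human will review it before sending."},
        {"role": "user", "content": incoming},
    ],
    max_tokens=256,
)

# The draft is only a starting point; read it fully before hitting send.
print(reply["choices"][0]["message"]["content"])
```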

CINDY COHN
Oh, that is just wonderful, because I do know that when you start talking with public servants about more public participation, often what you get is: oh, you're making my job harder, right? You're making more work for me. And what you've done is use technology in a way that actually makes their job easier. The other thing I want to lift up in what you said is how important it is that these AI systems you're using are serving you. One of the things we talk about a lot is the dangers of AI systems - namely, who bears the downside if the AI is wrong?
And when you're using a service that is air-gapped from the rest of the internet, and it is largely being used to serve you in what you're doing, then the downside of it being wrong doesn't fall on, you know, the person who doesn't get bail. It's on you, and you're in the best position to correct it, to actually recognize that there's a problem, and to make it better.

AUDREY TANG
Exactly. Yeah. So I call these AI systems assistive intelligence, after assistive technology, because it empowers my dignity, right? I have this assistive tech, which is a pair of eyeglasses. It's very transparent, and if I see things wrong after putting on these eyeglasses, nobody blames the eyeglasses.
It's always the person who is empowered by the eyeglasses. But if instead I wear, not eyeglasses, but one of those VR devices that consumes all the photons, uploads them to the cloud for some very large corporation to calculate, and then projects them back to my eyes - maybe with some advertisement in them and so on - then it's very hard to tell whether the decision-making falls on me or on those intermediaries that basically block my eyesight and just present me an alternate reality. So I always prefer things that are like eyeglasses - or bicycles, for that matter - that someone can repair themselves, without violating an NDA or paying $3 million in license fees.

CINDY COHN
That's great. And open source for the win again there. Yeah.

AUDREY TANG
Definitely.

CINDY COHN
Yeah, well, thank you so much, Audrey. I tell you, this has been kind of like a breath of fresh air, and I really appreciate you giving us a glimpse into a world in which the values I think we all agree on are actually being implemented - and implemented, as you said, in a way that scales and makes things better for ordinary people.

AUDREY TANG
Yes, definitely. I really enjoy the questions as well. Thank you so much. Live long and prosper.

JASON KELLEY
Wow. A lot of the time we talk to folks and it's hard to get to a vision of the future that we feel positive about. And this was the exact opposite. I have rarely felt more positively about the options for the future and how we can use technology to improve things and this was just - what an amazing conversation. What did you think, Cindy?

CINDY COHN
Oh, I agree. And the thing that I love about it is, she's not just positing about the future. She's telling us stories that are 10 years old about how they fixed things in Taiwan - the Uber story and some of the other stories from the Sunflower movement. She didn't just show up and say the future's going to be great. She's not just dreaming; they're doing.

JASON KELLEY
Yeah. And that really stood out to me. Some of the things that I expected to get more theoretical answers to - like, what do you mean when you say democracy is a technology - got quite literal ones: democracy suffers from low bandwidth and high latency, and the speed at which individuals communicate with the government can be increased in the same way we increase bandwidth. It was just such a concrete way of thinking about it.
And another concrete example was, you know, how do you get involved in something like this? And she said, well, we just basically forked the website of the government with a slightly different domain and put up better information until the government said, okay, fine, we'll just incorporate it. These are such concrete things that people can really understand. It's amazing.

CINDY COHN
Yeah, the other thing I really liked was the point that making government better - making it work for people - is really one of the ways we counter authoritarianism. She said one of the arguments in favor of authoritarianism is that it's more efficient, that it can get things done faster than a messy, chaotic, democratic process.
And she said, well, you know, we just fixed that: we created systems in which democracy was more efficient than authoritarianism. And she talked a lot about their experience during COVID, the result being that they didn't have a huge misinformation problem or a huge anti-vax community in Taiwan, because the government worked.

JASON KELLEY
Yeah, that's absolutely right. And it's so refreshing to see that there are models we can look toward, right? I mean, it feels like we're constantly sort of getting things wrong, and this was just such a great way to say: oh, here's something we can actually do that will make things better in this country or in other countries.
Another point that was really concrete was the technology that twists algorithms around: instead of surfacing disagreements, surfacing agreements. The Polis idea, and the ways we can make technology work for us. There was a phrase she used - thinking of algorithms and other technologies as assistive - and I thought that was really brilliant. What did you think about that?

CINDY COHN
I really agree. I think that building systems that can surface agreement, as opposed to doubling down on disagreement, seems so obvious in retrospect. This open source technology, Polis, has been doing it for a while, but I think we really do need to think about how to build systems that help us move toward agreement and a shared view of how our society should be, as opposed to feeding polarization. I think this is a problem on everyone's mind.
And, when we go back to Larry Lessig's four pillars, here's actually a technological way to surface agreement. Now, I think Audrey's using all of the pillars. She's using law for sure. She's using norms for sure, because they're creating a shared norm around higher bandwidth democracy.
But really, you know, in her heart, you can tell she's a hacker, right? She's using technologies to try to build this shared world, and it just warms my heart. It's really cool to see this approach - and, of course, radical openness as part of it all - being applied in a governmental context in a way that's working far better than a lot of people believed possible.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode you heard reCreation by airtone, Kalte Ohren by Alex featuring starfrosch and Jerry Spoon, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.
You can find links to their music in our episode notes, or on our website at eff.org/podcast.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.
