
Podcast Episode: Love the Internet Before You Hate On It

By: Josh Richman
May 21, 2025, 03:01

There’s a weird belief out there that tech critics hate technology. But do movie critics hate movies? Do food critics hate food? No! The most effective, insightful critics do what they do because they love something so deeply that they want to see it made even better. The most effective tech critics have had transformative, positive online experiences, and now unflinchingly call out the surveilled, commodified, enshittified landscape that exists today because they know there has been – and still can be – something better.


Listen on Spotify • Listen on Apple Podcasts • Subscribe via RSS

(You can also find this episode on the Internet Archive and on YouTube.)

That’s what drives Molly White’s work. Her criticism of the cryptocurrency and technology industries stems from her conviction that technology should serve human needs rather than mere profits. Whether it’s blockchain or artificial intelligence, she’s interested in making sure the “next big thing” lives up to its hype, and more importantly, to the ideals of participation and democratization that she experienced. She joins EFF’s Cindy Cohn and Jason Kelley to discuss working toward a human-centered internet that gives everyone a sense of control and interaction – open to all in the way that Wikipedia was (and still is) for her and so many others: not just as a static knowledge resource, but as something in which we can all participate. 

In this episode you’ll learn about:

  • Why blockchain technology has built-in incentives for grift and speculation that overwhelm most of its positive uses
  • How protecting open-source developers from legal overreach, including in the blockchain world, remains critical
  • The vast difference between decentralization of power and decentralization of compute
  • How Neopets and Wikipedia represent core internet values of community, collaboration, and creativity
  • Why Wikipedia has been resilient against some of the rhetorical attacks that have bogged down media outlets, but remains vulnerable to certain economic and political pressures
  • How the Fediverse and other decentralization and interoperability mechanisms provide hope for the kind of creative independence, self-expression, and social interactivity that everyone deserves  

Molly White is a researcher, software engineer, and writer who focuses on the cryptocurrency industry, blockchains, web3, and other tech in her independent publication, Citation Needed. She also runs the websites Web3 is Going Just Great, where she highlights examples of how cryptocurrencies, web3 projects, and the industry surrounding them are failing to live up to their promises, and Follow the Crypto, where she tracks cryptocurrency industry spending in U.S. elections. She has volunteered for more than 15 years with Wikipedia, where she serves as an administrator (under the name GorillaWarfare) and functionary, and previously served three terms on the Arbitration Committee. She's regularly quoted or bylined in news media; speaks at major conferences including South by Southwest and Web Summit; guest lectures at universities including Harvard, MIT, and Stanford; and advises policymakers and regulators around the world.

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

MOLLY WHITE: I was very young when I started editing Wikipedia. I was like 12 years old, and when it said the encyclopedia that anyone can edit, “anyone” means me, and so I just sort of started contributing to articles and quickly discovered that there was this whole world behind Wikipedia that a lot of us really don't see, where very passionate people are contributing to creating a repository of knowledge that anyone can access.
And I thought, I immediately was like, that's brilliant, that's amazing. And you know that motivation has really stuck with me since then, just sort of the belief in open knowledge and open access I think has, you know, it was very early for me to be introduced to those things and I, I sort of stuck with it, because it became, I think, such a formative part of my life.

CINDY COHN: That’s Molly White talking about a moment that is hopefully relatable to lots of folks who think critically about technology – that moment when you first experienced how, sometimes, the internet can feel like magic.
I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. A big part of our job at EFF is envisioning the ways things can go wrong online – and jumping into action to help when things DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.

JASON KELLEY: Our guest today is Molly White. She's a journalist and web engineer, and one of the strongest voices thinking and speaking critically about technology – specifically, she's been an essential voice on cryptocurrency and what people often call Web3, usually a reference to blockchain technologies. She runs an independent online newsletter called Citation Needed, and at her somewhat sarcastically named website "Web3 is Going Just Great" she chronicles the latest, often alarming news – scams and schemes that make those of us working to improve the internet pull our hair out.

CINDY COHN: But she’s not a pessimist. She comes from a deep love of the internet, but is also someone who holds the people that are building our digital world to account, with clear-eyed explanations of where things are going wrong, but also potential that exists if we can do it right. Welcome, Molly. Thanks for being here.

MOLLY WHITE: Thank you for having me.

CINDY COHN: So the theme of our show is what does it look like if we start to get things right in the digital world? Now you recognize, I believe, the value of blockchain technologies, what they could be.
But you bemoan how far we are from that right now. So let's start there. What does the world look like if we start to use the blockchain in a way that really lives up to its potential for making things better online?

MOLLY WHITE: I think that a lot of the early discussions about the potential of the blockchain were very starry-eyed and sort of utopian. Much in the way that early discussions of the internet were that way. You know, they promised that blockchains would somehow democratize everything we do on the internet, you know, make it more available to anyone who wanted to participate.
It would provide financial rails that were more equitable and had fewer rent seekers and intermediaries taking fees along the way. A lot of it was very compelling.
But I think as we've seen the blockchain industry, such as it is now, develop, we've seen that this technology brings with it a set of incentives that are incredibly challenging to grapple with. And it's made me wonder, honestly, whether blockchains can ever live up to the potential that they originally claimed, because those incentives have seemed to be so destructive that much of the time any promises of the technology are completely overshadowed by the negatives.

CINDY COHN: Yeah. So let's talk a little bit about those incentives, 'cause I think about that a lot as well. Where do you see those incentives popping up and what are they?

MOLLY WHITE: Well, any public blockchain has a token associated with it, which is the cryptocurrency that people are trading around, speculating on, you know, purchasing in hopes that the number will go up and they will make a profit. And in order to maintain the blockchain, you know, the actual system of records that is storing information or providing the foundation for some web platform, you need that cryptocurrency token.
But it means that whatever you're trying to do with the blockchain also has this auxiliary factor to it, which is the speculation on the cryptocurrency token.
And so time and time again, watching this industry and following projects, claiming that they will do wonderful, amazing things and use blockchains to accomplish those things, I've seen the goals of the projects get completely warped by the speculation on the token. And often the project's goals become overshadowed by attempts to just pump the price of the token, in often very inauthentic ways or in ways that are completely misaligned with the goals of the project. And that happens over and over and over again in the blockchain world.

JASON KELLEY: Have you seen that not happen with any project? Is there any project that you've said, wow, this is actually going well. It's like a perfect use of this technology, or because you focus on sort of the problems, is that just not something you've come across?

MOLLY WHITE: I think where things work well is when those incentives are perfectly aligned, which is to say that if there are projects that are solely focused on speculation, then the blockchain speculation works very well. Um, you know, and so we see people speculating on Bitcoin, for example, and, and they're not hoping necessarily that the Bitcoin ledger itself will do anything.
The same is true with meme coins. People are speculating on these tokens that have no purpose behind them besides, you know. Hoping that the price will go up. And in that case, you know, people sort of know what they're getting into and it can be lucrative for some people. And for the majority of people it's not, but you know, they sort of understand that going into it, or at least you would hope that they do.

CINDY COHN: I think of the blockchain as, you know, when they say this'll go down on your permanent record, this is the permanent record.

MOLLY WHITE: That’s usually a threat.

CINDY COHN: Yeah.

MOLLY WHITE: I try to point that out as well.

CINDY COHN: Now, you know, look, to be clear, we work with people who do international human rights work saving the records before a population gets destroyed in a way that that can't be destroyed by the people in power is, is, is one of the kind of classic things that you want a secure, permanent place to store stuff, um, happens. And so there's, you know, there's that piece. So where do you point people to when you're thinking about like, okay, what if you want a real permanent record, but you don't want all the dreck of the cryptocurrency blockchain world?

MOLLY WHITE: Well, it really depends on the project. And I really try to emphasize that because I think a lot of people in the tech world come at things somewhat backwards, especially when there is a lot of hype around a technology in the way that there was with blockchains and especially Web3.
And we saw a lot of people essentially saying, I wanna do something with a blockchain. Let me go find some problem I can solve using a blockchain, which is completely backwards to how most technologists are used to addressing problems, right? They're faced with a problem. They consider possible ways to solve it, and then they try to identify a technology that is best suited to solving that problem.
And so, you know, I try to encourage people to reverse the thinking back to the normal way of doing things where, sure, you might not get the marketing boosts that Web3 once brought in. And, you know, it certainly it was useful to attract investors for a while to have that attached to your project, but you will likely end up with a more sustainable product at the end of the day because you'll have something that works and is using technology that is well suited to the problem. And so, you know, when it comes to where would I direct people other than blockchains, it very much depends on their problem and, and the problem that they're trying to solve.
For example, if you don't need to worry about having a, a system that is maintained by a group of people who don't trust each other, which is the blockchain’s sort of stated purpose, then there are any number of databases that you can use that work in the more traditional manner where you rely on perhaps a group of trusted participants or a system like that.
If you're looking for a more distributed or decentralized solution, there are peer-to-peer technologies that are not blockchain based that allow this type of content sharing. And so, you know, like I said, it really just depends on the use case more than anything.

JASON KELLEY: Since you brought up decentralization, this is something we talk about a lot at EFF in different contexts, and I think a lot of people saw blockchain and heard decentralized and said, that sounds good.
We want less centralized power. But where do you see things like decentralization actually helping if this kind of Web3 tech isn't a place where it's necessarily useful or where the technology itself doesn't really solve a lot of the problems that people have said it would.

MOLLY WHITE: I think one of the biggest challenges with blockchains and decentralization is that when a lot of people talk about decentralization, they're talking about the decentralization of power, as you've just mentioned, and in the blockchain world, they're often talking about the decentralization of compute, which is not necessarily the same thing, and in some cases is very much different.

JASON KELLEY: If you can do a rug pull, it's not necessarily decentralized. Right?

MOLLY WHITE: Right. Or if you're running a blockchain and you're saying it's decentralized, but you run all of the validators or the miners for that blockchain, then you, you know, the computers may be physically located all over the world, or, you know, decentralized in that sort of sense. But you control all the power and so you do not have a truly decentralized system in that manner of speaking.
And I think a lot of marketing in the crypto world sort of relied on people not considering the difference between those two things, because there are a lot of crypto projects that, you know, use all of the buzzwords around decentralization and democratization and, you know, that type of thing that are very, very centralized, very similar to the traditional tech companies where, you know, all of Facebook's servers may be located physically all around the world, but no one's under the impression that Facebook is a decentralized company. Right? And so I think that's really important to remember is that there's nothing about blockchain technology specifically that requires a blockchain project to be decentralized in terms of power.
It still requires very intentional decision making on the parts of the people who are running that project to decentralize the power and reduce the degree to which any one entity can control the network. And so I think that there is this issue where people sort of see blockchains and they think decentralized, and in reality you have to dig a lot deeper.

CINDY COHN: Yeah, EFF has participated in a couple of the sanctions cases and the prosecutions of people who have developed pieces of the blockchain world, especially around mixers. TornadoCash is one that we participated in, and I think this is an area where we have a kind of similar view about the role of the open source community and kind of the average coder and when their responsibility should create liability and when they should be protected from liability.
And we've tried to continue to weigh in on these cases to make sure the courts don't overstep, right? Because the prosecution gets so mad. You're talking about a lot of money laundering and, and things like that, that the, you know, the prosecution just wants to throw the book at everybody who was ever involved in these kinds of things and trying to create this space where, you know, a coder who just participates in a GitHub developing some piece of code doesn't have a liability risk.
And I think you've thought about this as well, and I'm wondering, do you see the government overstepping and do you think it's right to continue to think about that, that overstepping?

MOLLY WHITE: Yeah, I mean, I think it's that those are the types of questions that are really important when it comes to tackling problems around blockchains and cryptocurrencies and the financial systems that are developing around these products.
You have to be really cautious that, you know, just because a bad thing is happening, you don't come in with a hammer that is, you know, much too big and start swinging it around and hitting sort of anyone in the vicinity because, you know, I think there are some things that should absolutely be protected, like, you know, writing software, for example.
A person who writes software should not necessarily be liable for everything that another person then goes and does with that software. And I think that's something that's been fairly well established through, you know, cryptography cases, for example, where people writing encryption algorithms and software to do strong encryption should not be considered liable for whatever anyone encrypts with that technology. We've seen it with virus writers, you know, it would be incredibly challenging for computer scientists to research and sort of think about new viruses and protect against vulnerabilities if they were not allowed to write that software.
But, you know, if they're not going and deploying this virus on the world or using it to, you know, do a ransomware attack, then they probably shouldn't be held liable for it. And so similar questions are coming up in these cryptocurrency cases or these cases around cryptocurrency mixers that are allowing people to anonymize their transactions in the crypto world more adequately.
And certainly that is heavily used in money laundering and in other criminal activities that are using cryptocurrencies. But simply writing the software to perform that anonymization is not itself, I think, a crime. Especially when there are many reasons you might want to anonymize your financial transactions that are otherwise publicly visible to anyone who wishes to see them, and, you know, can be linked to you if you are not cautious about your cryptocurrency addresses or if you publish them yourself.
And so, you know, I've tried to speak out about that a little bit because I think a lot of people see me as, you know, a critic of the cryptocurrency world and the blockchain world, and I think it should be banned or that anyone trading crypto should be put in jail or something like that, which is a very extreme interpretation of my beliefs and is, you know, absolutely not what I believe. I think that, you know, software engineers should be free to write software and then if someone takes that software and commits a crime with it, you know, that is where law enforcement should begin to investigate. Not at the, you know, the software developer's computer.

CINDY COHN: Yeah, I just think that's a really important point. I think it's easy, especially because there's so much fraud and scam and abuse in this space, to try to make sure that we're paying attention to where are we setting the liability rules because even if you don't like cryptocurrency or any of those kinds of things, like protecting anonymity is really important.
It's kind of a function of our times right now where people are either all one or all the other. And I really have appreciated, as you've kind of gone through this, thinking about a position that protects the things that we need to protect, even if we don't care about 'em in this context, because we might in another, and law of course, is kind of made up of things that get set in one context and then applied in another, while at the same time being, you know, kind of no holds barred, critical of the awful stuff that's going on in this world.

JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now, back to our conversation with Molly White.

JASON KELLEY: Some of the technologies you're talking about when sort of separated out from, maybe, the hype or the negatives that have like, overtaken the story. Things like peer-to-peer file sharing, cryptography. I mean, even, let's say, being able to send money to someone, you know, with your phone, if you want to call it that, are pretty incredible at some level, you know?
And you gave a talk in October that was about a time that you felt like the web was magic and you brought up a, a website that I'm gonna pretend that I've never heard of, so you can explain it to me, called Neopets. And I just wanna, for the listeners, could you explain a little bit about what Neopets was and sort of how it helped inform you about the way you want the web to work and, and things like that?

MOLLY WHITE: Yeah, so Neopets was a kids' game when I was a kid. You could adopt these little cartoon pets and you could, like, feed them and change their colors and do things, you know, play little games with them.

JASON KELLEY: Like Tamagotchis a little bit,

MOLLY WHITE: a little bit. Yeah. Yeah. There was also this aspect to the website where you could edit your user page and you could create little webpages in your account that were, it was pretty freewheeling, you know, you could edit the CSS and the HTML and you could make your own little website essentially. And as a kid that was really my first exposure to the idea that the internet and these websites that I was seeing, you know, sort of for the first time were not necessarily a read-only operation. You know, I was used to playing maybe little games on the internet, whatever kids were doing on the internet at the time.
And Neopets was really my first realization that I could add things to the internet or change the way they looked or interact with it in a way that was, you know, very participatory. And that later sort of turned into editing Wikipedia and then writing software and then publishing my writing on the web.
And that was really magical for me because it sort of informed me about the platform that was in front of me and how powerful it was to be able to, you know, edit something, create something, and then the whole world could see it.

JASON KELLEY: There's a really common critique right now that young people are sort of learning only bad things online or like only overusing the internet. And I mean, first of all, I think that's obviously not true. You know, every circumstance is different, but do you see places where like the way you experienced the internet growing up are still happening for young people?

MOLLY WHITE: Yeah, I mean, I think a lot of those, as you mentioned, I think a lot of those critiques are very misguided and they miss a lot of the incredibly powerful and positive aspects of the internet. I mean, the fact that you can go look something up and learn something new in half a second, is revolutionary. But then I think there are participatory examples, much like what I was experiencing when I was younger. You know, people can still edit Wikipedia the way that I was doing as a kid. That is a very powerful thing to do when you're young, to realize that knowledge is not this thing that is handed down from on high from some faceless expert who wrote history, but it's actually something that people are contributing to and improving constantly. And it can always be updated and improved and edited and shared, you know, in this sort of free and open way. I think that is incredibly powerful and is still open to people of any age who are, you know, able to access the site.

JASON KELLEY: I think it's really important to bring up some of these examples because something I've been thinking about a lot lately as these critiques and attacks on young people using the internet have sort of grown and even, you know, entered the state and congressional level in terms of bills, is that a lot of the people making these critiques, I feel like never liked the internet to begin with. They don't see it as magic in the way that I think you do and that, you know, a lot of our listeners do.
And it's a view that is a problem specifically because I feel like you have to have loved the internet before you can hate it. You know, like, it's not like you need to really be able to critique the specific things rather than just sort of throw out the whole thing. And one of the things you know, I like about the work that you do is that you clearly have this love for technology and for the internet, and that lets you, I think, find the problems. And other people can't see through into those specific individual issues. And so they just wanna toss the whole thing.

MOLLY WHITE: I think that's really true. I think that, you know, I think there is this weird belief, especially around tech critics, that tech critics hate technology. It's so divorced from reality because, you don't see that in other worlds where, you know, art critics are never told that they just hate all art. I think most people understand that art critics love art and that's why they are critics.
But with technology critics, there's sort of this weird, you know, this perception of us as people who just hate technology, we wanna tear it all down when in reality, you know, I know a lot of tech critics and, and most of us, if not all of us, that I can think of come from a, you know, a background of loving technology often from a very young age, and it is because of that love and the want to see technology to continue to allow people to have those transformative experiences that we criticize it.
And that's, for some reason, just a hard thing, I think for some people to wrap their minds around.

JASON KELLEY: I want to talk a little bit more about Wikipedia, the whole sort of organization and what it stands for and what it does has been under attack a lot lately as well. Again, I think that, you know, it's a lot of people misunderstanding how it works and, and, um, you know, maybe finding some realistic critiques of the fact that, that, you know, it's individually edited, so there's going to be some bias in some places and things like that, and sort of extrapolating out when they have a good faith argument to this other place. So I'm wondering if you've thought about how to protect Wikipedia, how to talk about it. How you know your experience with it has made you understand how it works better than most people.
And also just generally, you know how it can be used as a model for the way that the internet should be or the way we can build a better internet.

MOLLY WHITE: I think this ties back a little bit to the decentralization topic where Wikipedia is not decentralized in the sense that, you know, there is one company or one nonprofit organization that controls all the servers. And so there is this sort of centralization of power in that sense. But it is very decentralized in the editing world where there is no editorial board that is vetting every edit to the site.
There are, you know, numerous editors that contribute to any one article and no one person has the final say. There are different language versions of Wikipedia that all operate somewhat independently. And because of that, I think it has been challenging for people to attack it successfully.
Certainly there have been no shortage of those attacks. Um, but you know, it's not a company that someone could buy out and take over in ways that we've seen, you know, for example Elon Musk do with Twitter. There is no sort of editorial board that can be targeted and sort of pressured to change the language on the site. And, you know, I think that has helped to make Wikipedia somewhat resilient in ways that we've seen news organizations or other media publications struggle with recently where, you know, they have faced pressure from their buyers. The, you know, the people who own those organizations to be sure.
They've faced, you know, threats from the government in some cases. And Wikipedia is structured somewhat differently, in ways that I think help it remain more protected from those types of attacks. But, you know, I, I am cautious to note that, you know, there are still vulnerabilities.
The attacks on Wikipedia need to be vociferously opposed. And so we have to be very cautious about this because the incredible resource that Wikipedia is, is something that doesn't just sort of happen in a vacuum, you know, outside of any individual's actions.
It requires constant support, constant participation, constant editing. And so, you know, it's certainly a model to look to in terms of how communities can organize and contribute to, um, you know, projects on the internet. But it's also something that has to be very carefully maintained.

CINDY COHN: Yeah, I mean, this is just a lesson for our times, right? You know, there isn't a magical tech that can protect against all attacks. And there isn't a magical, you know, 501(c)(3) nonprofit that can be resistant against all the attacks. And we're in a time where they're coming fast and furious against our friends at Wikimedia, along with a lot of other, other things.
And I think the impetus is on communities to show up and, and, you know, not just let these things slide or think that, you know, uh, the internet might be magic in some ways, but it's not magic in these ways. Like we have to show up and fight for them. Um, I wanted to ask you a little bit about, um, kind of big tech's embrace of AI.
Um, you've been critical of it. We've been critical of it as well in many ways. And, and I, I wanna hear kind of your concerns about it and, um, and, and kind of how you see AI’s, you know, role in a better world. But, you know, also think about the ways in which it's not working all that well right now.

MOLLY WHITE: I generally don't have this sort of hard and fast view of AI is good or AI is bad, but it really comes down to how that technology is being used. And I think the widespread use of AI in ways that exploit workers and creatives and those who have decided to publish something online for example, and did not expect for that publication to be used by big tech companies that are then profiting off of it, that is incredibly concerning. Um, as well as the ways that AI is marketed to people. Again, this sort of mirrors my criticisms surrounding the crypto industry, but a lot of the marketing around AI is incredibly misleading. Um, you know, they're making promises that are not borne out in reality.
They are selling people a product that will lie to you, you know, that will tell you things that are inaccurate. So I have a lot of concerns around AI, especially as we've seen it being used in the broadest, and sort of by the largest companies. But you know, I also acknowledge that there are ways in which some of this technology has been incredibly useful. And so, you know, it is one of these things where it has to be viewed with nuance, I think, around the ways it's being developed, the ways it's being deployed, the ways it's being marketed.

CINDY COHN: Yeah, there is a, a kinda eerie familiarity around the hype around AI and the hype around crypto. That, it's just kind of weird. It feels like we're going through, like, a Groundhog Day – like we're living through another hype cycle that feels like the last. I think, you know, for us at EFF, we're really, we, we've tried to focus a lot on governmental use of AI systems and AI systems that are trying to predict human behavior, right?
The digital equivalent of phrenology, right? You know, let us do sentiment analysis on the things that you said, and that'll tell us whether you're about to be a criminal or, you know, the right person for the job. I think those are the places that we've really identified, um, as, you know, problematic on a number of levels. You know, number one, it doesn't work nearly as well as –

MOLLY WHITE: That is a major problem!

CINDY COHN: It seems like that ought to be number one, right? And this, you know, especially spending your time in Wikipedia, where you're really working hard to get it right, and you see the kind of back and forth of the conversation. But the central thing about Wikipedia is it's trying to actually give you truthful information – and then watching the world get washed over with these AI assistants that are really not at all focused on getting it right, you know, or really focused on predicting the next word, or however that works, right? Like, um, it's gotta be kind of strange from where you sit, I suspect, to see this.

MOLLY WHITE: Yeah, it's, it's very frustrating. And, you know, I, I like to think we lived in a world at one time where people wanted to produce technology that helped people, technology that was accurate, technology that worked in the ways that they said it did. And it's been very weird to watch, especially over the last few years that sort of, uh, those goals degrade where, well, maybe it's okay if it gets things wrong a lot, you know, or maybe it's okay if it doesn't work the way that we've said it does or maybe never possibly can.
That's really frustrating to watch as someone who, again, loves technology and loves the possibilities of technology to then see people just sort of using technology to, to deliver things that are, you know, making things worse for people in many ways.

CINDY COHN: Yeah, so I wanna flip it around a little bit. You, like EFF, sometimes spend a lot of time on all the ways that things are broken. So how do you think about how to get to a place where things are not broken – or how do you even just keep focusing on a better place that we could get to?

MOLLY WHITE: Well, like I said, you know, a lot of my criticism really comes down to the industries and the sort of exploitative practices of a lot of these companies in the tech world. And so, to the extent possible, separating myself from those companies and from their control has been really powerful to sort of regain some of that independence that I once remembered the web enabling – where, you know, if you had your own website, you could kind of do anything you wanted. And you didn't have to stay within the 280 characters if you had an idea, you know, and you could publish, uh, you know, a video that was longer than 10 minutes long, or whatever it might be.
So sort of returning to some of those ideals around creating my own spaces on the web where I have that level of creative freedom, and certainly freedom in other ways, has been very powerful. And then finding communities of people who believe in those same things. I've taken a lot of hope in the Fediverse and the communities that have emerged around those types of technologies and projects where, you know, they're saying maybe there is an alternative out there to, you know, highly centralized big tech, social media being what everyone thinks of as the web. Maybe we could create different spaces outside of that walled garden where we all have control over what we do and say, and who we interact with. And we set the terms on which we interact with people.
And sort of push away the, the belief that, you know, a tech company needs to control an algorithm to show you what it is that you want to see, when in reality, maybe you could make those decisions for yourself or choose the algorithm or, you know, design a system for yourself using the technologies that are available to everyone, but have been sort of walled in by a large or many of the large players in the web these days.

CINDY COHN: Thank you, Molly. Thank you very much for coming on and, and spending your time with us. We really appreciate the work that you're doing, um, and, and the way that you're able to boil down some pretty complicated situations into, you know, kind of smart and thoughtful ways of reflecting on them. So thank you.

MOLLY WHITE: Yeah. Thank you.

JASON KELLEY: It was really nice to talk to someone who has that enthusiasm for the internet. You know, I think all of our guests probably do, but when we brought up Neopets, that excitement was palpable, and I hope we can find a way to get more of that enthusiasm back.
That's one of the things I'm taking away from that conversation: more people need to be enthusiastic about using the internet, whatever that takes. What did you take away from chatting with Molly that we need to do differently, Cindy?

CINDY COHN: Well, I think that the thing that made the enthusiasm pop in her voice was the idea that she could control things. That she was participating and, and participating not only in Neopets, but the participation on Wikipedia as well, right?
That she could be part of trying to make truth available to people and recognizing that truth in some ways isn't an endpoint, it's an evolving conversation among people to try to keep getting at getting it right.
That doesn't mean there isn't any truth, but it does mean that there is an open community of people who are working towards that end. And, you know, you hear that enthusiasm as well. It's, you know, the more you put in, the more you get out of the internet – and trying to make that a more common experience of the internet, that things aren't just handed to you or taught to you, but really it's a two-way street. That's where the enthusiasm came from for her, and I suspect for a lot of other people.

JASON KELLEY: Yeah, and what you're saying about truth, I think she sort of applies this in a lot of different ways. Even specific technologies – I think most people realize this, but you have to say it again and again – aren't necessarily right or wrong for everything. You know, AI isn't right or wrong for every scenario. Things are always evolving; how we use them is evolving. Whether or not something is correct today doesn't mean it will be correct tomorrow. And there's just a sort of nuance and awareness that she had about how these different things work and when they make sense that I hope we can continue to see in more people, instead of just a sort of, uh, flat, across-the-board dislike of, you know, quote-unquote the internet or quote-unquote social media and things like that.

CINDY COHN: Yeah, or the other way around, like whatever it is, there's a hype cycle and it's just hyped over and over again. And that she's really charting a middle ground in the way she writes and talks about these things that I think is really important. I think the other thing I really liked was her framing of decentralization as thinking about decentralizing power, not decentralizing compute, and that difference being something that is often elided or not made clear.
But that can really help us see where, you know, where decentralization is happening in a way that's empowering people, making things better. You have to look for decentralized power, not just decentralized compute. I thought that was a really wise observation.

JASON KELLEY: And I think could be applied to so many other things where a term like decentralized may be used because it's accessible from everywhere or something like that. Right? And it's just, these terms have to be examined. And, and it sort of goes to her point about marketing, you know, you can't necessarily trust the way the newest fad is being described by its purveyors.
You have to really understand what it's doing at a deeper level, and that's the only way you can really determine if it's really decentralized, if it's really interoperable, if it's really, you know, whatever the new thing is.

CINDY COHN: Mm-hmm. Yeah, I think that's right. And you know, luckily for us, we have Molly, who digs deep into the details of this for so many technologies, and I think we need to, you know, support and defend all the people who are doing that kind of careful work for us, because we can't do all of it, you know – we're humans.
But having people who will do that for us in different places, who are trusted and whose agenda is clear and understandable – that's kind of the best we can hope for. And the more of that we build and support and create spaces for on the, you know, uncontrolled open web, as opposed to the controlled tech giants and walled gardens, as she said, I think the better off we'll be.

JASON KELLEY: Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley…

CINDY COHN: And I’m Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons attribution 4.0 international and includes the following music licensed Creative Commons 3.0 unported by its creators: Drops of H2O, the filtered water treatment, by J. Lang. Additional beds by Gaetan Harris.

Podcast Episode: Digital Autonomy for Bodily Autonomy

We all leave digital trails as we navigate the internet – records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world – and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? What if we reclaimed the right to go, read, see, do and be what we wish online as we try to do offline? Moreover, what if we saw digital autonomy and bodily autonomy as two sides of the same coin – inseparable?



(You can also find this episode on the Internet Archive and on YouTube.)

Kate Bertash wants that digital autonomy for all of us, and she pursues it in many different ways – from teaching abortion providers and activists how to protect themselves online, to helping people stymie the myriad surveillance technologies that watch and follow us in our communities. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline. 

In this episode you’ll learn about:

  • Why it’s important for local communities to collaboratively discuss and decide whether and how much they want to be surveilled
  • How the digital era has blurred the bright line between public and private spaces
  • Why we can’t surveil ourselves to safety
  • How DEF CON – America's biggest hacker conference – embodies the ideal that we don’t have to simply accept technology as it’s given to us, but instead can break, tinker with, and rebuild it to meet our needs
  • Why building community helps us move beyond hopelessness to build and disseminate technology that helps protect everyone’s privacy

Kate Bertash works at the intersection of tech, privacy, art, and organizing. She directs the Digital Defense Fund, launched in 2017 to meet the abortion rights and bodily autonomy movements’ increased need for security and technology resources after the 2016 election. This multidisciplinary team of organizers, engineers, designers, abortion fund and practical support volunteers provides digital security evaluations, conducts staff training, maintains a library of go-to resources on reproductive justice and digital privacy, and builds software for abortion access, bodily autonomy, and pro-democracy organizations. Bertash also engages in various multidisciplinary civic tech projects as a project manager, volunteer, activist, and artist; she’s especially interested in ways that artistic methods can interrogate use of AI-driven computer vision, other analytical technologies in surveillance, and related intersections with our civil rights. 

Resources:

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

KATE BERTASH: It is me, having my experience, like walking through these spaces, and so much of that privacy, right, should, like, treat me as if my digital autonomy in this space is as important as my bodily autonomy in the world.
I think it's totally possible. I have such amazing optimism for the idea of reclaiming our digital autonomy and understanding that it is like the you that moves through the world in this way, rather than just some like shoddy facsimile or some, like, shadow of you.


CINDY COHN: That’s Kate Bertash speaking about how the world will be better when we recognize that our digital selves and our physical selves are the same, and that reclaiming our digital autonomy is a necessary part of reclaiming our bodily autonomy. And that’s especially true for the people she focuses on helping, people who are seeking reproductive assistance.
I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I’m Jason Kelley – EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. Now a big part of our job at EFF is to envision the ways things can go wrong online-- and jumping into the action to help when things then DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.

JASON KELLEY: Our guest today is someone who has been tirelessly fighting for the safety and privacy of a very vulnerable group of people for many years – and she does so with compassion, creativity and joy.

CINDY COHN: Kate Bertash is a major force in the world of digital privacy and security. Her work with the Digital Defense Fund started in 2017 as a resource to help serve the digital security needs of people seeking abortions and other reproductive care, and they have expanded their purview to include trans rights, elections integrity, harm reduction and other areas that are crucial to an equitable and functional democratic society. She’s also an artist, with a clothing line called Adversarial Fashion. She designs clothes that do all sorts of deliciously sneaky things – like triggering automatic license plate readers, or injecting junk data into invasive state and corporate monitoring systems. We’re just delighted to have her with us today – welcome Kate!

KATE BERTASH: Thank you so much for having me on. What an introduction.

CINDY COHN: Well, let's start with your day job, privacy and reproductive rights. You've been doing this since long before it became, you know, such a national crisis. Tell us about the Digital Defense Fund.

KATE BERTASH: So after Donald Trump was elected in 2016, I had started running some, what I would call, tech volunteering events, the most well known of which is the Abortion Access Hackathon in San Francisco. We had about 700 people apply to come, and hundreds of people over the weekend who basically were able to help people with very functional requests.
So we went to different organizations in the area and worked to ensure that they could get help with, you know, turning a spreadsheet into a database or getting help working on open source that they use for case management, or fixing something that was broken in their sales force. So, very functional stuff.
And then I was approached after that and asked if I wanted to run this new fund, the Digital Defense Fund. So we spent the first couple years kind of figuring out what the fund was going to do, but sort of organically and learning basically from the people that we serve and the organizations that work at Abortion Access, we now have this model where we can provide hands-on, totally free digital security and privacy support to organizations working in the field.
We provide everything from digital security evaluations to trainings. We do a lot of project management, connecting folks with different kinds of vendor software, community support, a lot of professional development.
And I think probably the best part is we also get to help them fund those improvements. So I know we always talk a lot about how things can improve, but I think kind of seeing it through, uh, and getting to watch people actually, you know, install things and turn them on and learn how to be their own experts has been a really incredible experience. So I can't believe that was eight years ago.

JASON KELLEY: You know, a lot has changed in eight years. We had the Dobbs decision that happened under the Biden administration, and now we've got the Dobbs decision under a Trump administration. I assume that, you know, your work has changed a lot. At EFF we've been doing some work with the Repro Uncensored Coalition, tracking the changes in takedowns of abortion-related content. And that is a hard thing to do, just for, you know, all the reasons – tracking what systems take down is sort of a thing you have to do one at a time and just put the data together. But for you, out of eight years – what's different now than maybe 2017, but certainly, you know, 2022?

KATE BERTASH: I think this is a really excellent question just because I think it's kind of strange to look backwards and, and know that, uh, abortion access is a really interesting space in that for decades it's been under various kinds of different legal, and I would say ideological attacks as well as, you know, dealing with the kind of common problems of nonprofits, usually funding, often being targets of financial scams and crime as all nonprofits are.
But I think the biggest change has been that, um, a lot of folks, I think, could always lean on the idea that abortion would be federally legal. And so if your job was helping people get their abortions, or performing abortions, or supporting folks with funding to get to their procedures, that always sort of had this, like, color of law that would kind of back you up or provide for you a certain level of security.
Um, now we kind of don't have that safety to lean on anymore, mentally as well as legally. And so a lot of the meat and potatoes of the work that we do – um, it was always about, you know, ensuring patient privacy. But a lot of times now it's also ensuring that organizations are kind of ready to ask and answer kind of hard questions about how they wanna work. What data is at risk when so much is uncertain in a legal space?
Because I think, you know, I hardly have to tell anybody at EFF that, often, uh, we kind of don't know what, what quote unquote qualifies or what is legal under a particular new law or statute until somebody makes you prove it in court.
And I think a lot of our job at Digital Defense Fund really then crystallized into what we can do to help people sort of tolerate this level of uncertainty and ensure that your tools and that your tactics and your understanding even of the environment that you're operating in at least buoys you and is a source of certainty and safety when the world cannot be.

CINDY COHN: Oh, I think that's great. Do you have a, an example?

KATE BERTASH: Yes, absolutely. I think one of the biggest changes that I've seen in how people tend to work and operate is that, uh, I think you know, this kind of backs into many other topics that I know get discussed on this podcast, which is that when we reach into our pocket for the computer that is on us all day, you know, our phone and we reach out to text people, it's, it's a very accessible way to reach somebody and trying to really wrap around the understanding of the difference between sending an SMS text message to somebody, or responding to a text message asking about services that your organization provides or where to get an abortion or something like that, and the difference of how much information is kept, for example, by your cell phone carrier. Usually, you know, as all of you have taught all of us very well, uh, in plain text as far as we know forever.
Uh, and the absolute huge difference then of getting to really inform people about this sort of static understanding of our environment that we operate in, that we kind of take for granted every day when we're just, like, texting our friends or, you know, getting a message about whether something's ready for pickup at the pharmacy. Uh, and then instead we get to help move people onto other tools – encrypted chat like Signal or Wire or whatever meets their needs – helping meet people where they're at on other platforms like WhatsApp, and to really not just, like, tell people these are the quote-unquote correct tools to use, because certainly there are many great ones – uh, you know, all roads lead to Rome, as they say.
But I think getting to improve people's sort of environmental understanding of the ocean that we're all swimming in, uh, that it actually doesn't have to work this way, but that these are also the results of systems that, are motivated by capital and how you make money off of data. And so I think trying to help people to be prepared then to make different decisions when they encounter new questions or new technologies has been a really, really big piece of it. And I love that it gets to start with something as simple as, you know, a safer place to have a sensitive conversation in a text message on your phone in a place like Signal. So, yeah.

CINDY COHN: Yeah, no, I think that makes such sense. And we've seen this, right? I mean, you know, we had a mother in Nebraska who went to jail because she used Facebook to communicate with her daughter, I believe about getting reproductive help. And the shifting to a just a different platform might've changed that story quite a bit because, you know, Facebook had this information and, you know, one of the things that, you know, we know as lawyers is that like when Facebook gets a subpoena or process asking for information about a user, the government doesn't have to tell them what the prosecution is for, right? So that, you know, it could be a bank robber or it could be a person seeking reproductive help. The company is not in a position to really know that. Now we've worked in a couple places to create situations in which if the company does happen to know for some reason they can resist.
But the way that the baseline legal system works means that we can't just, you know, uh, as much as I love to blame Facebook, we can't blame Facebook for this. We need actual tools that help protect people from the jump.

KATE BERTASH: Absolutely, and I think that case is a really important example of, especially I think, how unclear it is from platform to platform, sort of how that information is kept and used.
I think one of the really tragic things about that conversation was that it was a very loving conversation. It was the kind of experience I think that you would want to have between a parent and child, to be able to be there for each other. And they were even talking to each other while they were in the same house – they were just sharing a conversation from one room to the next. And to see the reaction the public had to that, I think, was very affirming to me that it was wrong, uh, that, you know, just the way that this platform is structured somehow then put this extra amount of risk on this family.
I think, because, you know, we can imagine that it should be a common experience, or a common right, to just have a simple conversation within your household and to know that that's in a safe place, that that's treated with the sensitivity that it deserves. And I think it helps us to understand that, you know, we are actually – and I mean this in a good sense of the word – entitled to that. And seeing Meta respond to the sort of outcry was also a very, like, positive flag for me, because their comms department does not typically respond to any individual subpoena that they receive, but they felt they had to come out and say why they responded and what the problem was there, um, I think as sort of an indication that this is important.
These different kinds of cases that come up, especially around abortion and criminalization, one of the reasons I think they're so important for us to cover is that, you know, on this podcast or within the spaces that both you and I work with so much about digital security and privacy kind of exists in this very like cloudy, theoretical space.
Like we have these, like, ideals of what we know we want to be true and, and often, you know, when you, when you're talking to folks about like big data, it's literally so large that it can be hard to like pin it down and decide how you feel. But these cases, they provide these concrete examples of how you think the world actually should or should not work.
And it really nails it down and lets people form these very strong emotional responses to it. Um, that's why I'm so grateful that, um, you know, organizations like yours get to help us contextualize that like, yes, there's this like, really personal, uh, and, and tragic story – and it also takes place within this larger conversation around your digital civil liberties.

CINDY COHN: Yeah, so let's flip that around a little bit. I've heard you talk about this before, which is, what would the world look like if our technologies actually stood up for us in these contexts? And, you know, inside the home is a very particular one. And I think because the Fourth Amendment is really clear about the need for privacy. It's one of the places where privacy is actually in our constitution, but I think we're having a broader conversation, like what would the world look like if the tools protected us in these times?

KATE BERTASH: I think especially, it's really interesting to think about the, the problems that I know I've learned so much from your team around the, the problem of what is public and what is private. I think, you know, we always talk about abortion access as a right to privacy and then it suddenly exists in this space where we kind of really haven't decided what that means, and especially anything that's very fuzzy about that.
People are often very familiar with the image of the protestor outside of the abortion clinic. There are many of the same problems kind of wrapped up in the fact that protestors will often film or take photographs or write down the license plates of people who are going in and out of clinics – often for a variety of reasons, but mostly to surveil them in some way. It's the kind of surveillance that we actually see from state actors or from corporations, but done on a very personal basis.
And it has a lot of that same level of damage. And we frequently have had to capitulate that, well, this is a public space, you know, people can take photos in a public area, and that information that is taken about your personal abortion experience unfortunately, you know, can be used and misused in whatever way people want.
And then we watched that exact same problem map itself onto the online space. So yeah, very important to me.

CINDY COHN: I think this is one of the fundamental things that the digital era brought us: an increasing recognition that this bright line between public spaces and private spaces isn't working.
And so we need a more nuanced conversation – it's not like there aren't public spaces online. I definitely want reporters to be able to, you know, do investigations that give us information about people in power and what they're doing. Um, so it's not either-or, right? And I think the thing is, we have to have a more nuanced conversation about which spaces that we think of as bright-line public spaces really shouldn't be treated as public in context. And I love your reframing of this as being about us. It's about us and our lives.

KATE BERTASH: Absolutely. Uh, I think one of the larger kind of examples that has come up as well, uh, is that your experience of seeking out medical care actually then travels into the domain of the doctor that you see – they often use an electronic health records system. And so you have this record of something that I don't think any of these companies were really quite adequately prepared for: the policy eventuality that they would be holding information that would be an enshrined human right in some states’ constitutions, but a crime in a different state. And you know, you have these products like Epic Everywhere, and they allow access to that same information from a variety of places, including from a state where, to that state, it is evidence of a crime to have this in the health record, versus just, you know, a normal continuity of care in a different state.
And kind of seeing how, you know, we tend to have these sort of debates and understandings, and trying to, like you say, examine the nuance and get to the bottom of how we wanna live in these different contexts of policy or in court cases – but then so much of it is held in this corporate space, and I think they really are not ready for the fact that they are going to have to take a much more active role, I think, than they even want to, uh, in understanding how that shows up for us.

JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You’re the reason that we exist. You can become a member, if you’re not one already, for just $25, and for a little more you can get some great, very stylish gear. The more members we have, the more power we have – in statehouses, courthouses, and on the streets. EFF has been fighting for digital rights for decades, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate.
And now back to our conversation with Kate Bertash.
So we've been talking a lot about the skills and wisdom that you've learned during the fight for reproductive rights, but I know a lot of that can be used in other areas as well. And I heard recently that you live in a pretty small rural town, and not all your neighbors share your political views. But you've been building sort of a local movement to fight surveillance there – and I’d love to hear how you are bringing together people with different sorts of political alignments on this privacy issue.

KATE BERTASH: Yeah, it actually had started so many years ago with Dave Moss, who's on the EFF team and I having a conversation about the license plate surveillance actually at clinics and, and kind of how that's affected by the proliferation of automated license plate reader technology. And I had come up with this, this like line of clothing called Adversarial Fashion, which, uh, injects junk into automated license plate readers.
It was a really fun project. I was really happy to see the public response to it, but as a result, I sort of learned a lot about these systems and kind of became a bit of an activist on the privacy issues around them.
And then suddenly – I now live in a rural community in southwest Washington – I found out on Facebook one day that our sheriff's department had purchased Flock automated license plate reader cameras, had already installed them, and just announced it. Like, there was no public discussion, no debate, no nothing. There had been debate in neighboring counties, where they decided, oh, kind of not for us. And I wanna give you a sense of the size: our county has 12,000 people in it. My town has a thousand people in it. So very tiny. You kind of almost wonder why you would even need license plate surveillance when you could just, like, literally ask almost anybody what's going on. I've seen people before on Facebook where they're like, hey, is this your car? You know, somebody stole it. Come pick it up. It's on our hill.

CINDY COHN: I grew up in a very small town in Iowa and the saying in our town was, you know, you don't need turn signals 'cause everybody knows where you're going.

KATE BERTASH: I love that. See, exactly – I did not know that about you, Cindy. I love that. And that was kind of this initiating, uh, event. When I found out, I'll be honest with you, I totally hit the ceiling. I was really mad, because, you know, you're active on all this stuff outside of your work – I've been all over the country talking about the problems with this technology and the privacy issues that it raises, and how tech companies take advantage of communities, and here they were taking advantage of my community.
It's like, not in my house! How is it in my house?

JASON KELLEY: Well, when did this happen? When? When did they install these?

KATE BERTASH: Oh my gosh, it had to be a couple of months ago. I mean, it was very, very recently. Yeah, it was super recently, and so I kind of did what I know best, which is that I took everything that I learned, I put it into a presentation to my neighbors. I scheduled a bunch of nights at the different libraries and community centers in my county, and invited everybody to come, and the sheriff and the undersheriff came too.
And the most surprising thing about this was, A, that people showed up. I was actually very pleasantly surprised. I think a lot of people, when they move to rural areas, they do so because, you know, they want to feel freer to not be watched every day by the state or by corporations, or even by their neighbors, frankly.
And so it was really surprising to me: this was probably the most politically diverse room I've ever presented to, and definitely people that I think would absolutely not love any of the rest of my politics. But both nights, one hundred percent of the room was in agreement that they did not like these cameras, did not think that they were a good fit for our community, and that they really did not appreciate, you know, not being asked.
I think that was kind of the core thing we wanted to get through: even if you do decide these are a good fit, we should have been asked first. And I had people shaking my hand afterwards who were like, thank you, young lady, for bringing up this important issue.
Um, it's still ongoing. Some of the cameras have been removed, uh, but not all of them. And I think there's a lot closer scrutiny now on, like, the disclosure page that Flock puts up, where you get to see kind of how the data is accessed. But, you know, I've been doing this privacy and safety work for a while, and it made me realize I still have room to be surprised. I was surprised that everybody in my community was very united on privacy. It might be the thing on which we most agree, and that was so heartwarming. I really can't wait to keep building on that and using it as a way to connect with people.

CINDY COHN: So I'd like to follow up, because we've been working hard to try to figure out how to convince people that you can't surveil yourself to safety, right? This stuff is always promoted as if it's going to make us safe. What stories did you hear that were resonating with people? What was the counter-story to, you know, "surveillance equals safety"?

KATE BERTASH: I think the biggest story that really connected with folks was actually the way in which that data was shared outside of our community. There was somebody sitting in the room who elaborated on that point. She said: I might like you as the sheriff. You know, these are all people who voted for the sheriff. We got to actually have this conversation face to face, which was really quite amazing. And they got to say to the sheriff: I voted for you. I might like you just fine. I might think you would be responsible logging into this stuff, but I don't know all those people these platforms share this stuff with.
And Flock actually shares your data, unless you specifically request that they turn it off, and I think that was where they were like, you know, I don't trust those people, I don't know those people.
I also don't know your successor, who's gonna get this. If we give this power to this office, I might not trust the future sheriff as much. And in a small town, like, that personal relationship matters a lot. I am obviously very concerned about the ways in which policing technology and power get abused, but because so many of these people are your neighbors and you know them, it was so helpful to put it in terms of, like, you know, I don't want you to think it's about whether or not I trust your competence personally.
It's about rather what we maybe owe each other. And you know, I wish you had asked me first, and it became a very like, powerful personal experience and a personal narrative. And, and I think even at the end of the night, like by the second night, I think the sheriff's department had really changed their tune a lot.
And I said to them, I was like, this is the longest we've ever gotten to talk to each other. And I think that's a great thing.

CINDY COHN: I think that's really great. And what I love about this is where it lands. You know, community has come up over and over again in the way that we've talked to different people about what's important in making technology serve people.

KATE BERTASH: Yeah, people make these decisions very emotionally. And I think it was really nice to be able to talk about trust and relationships and communication, because so much of the conversation, when it's just held online, gets pulled into what is probably everybody in this room's least favorite phrase: if you're not doing anything wrong, why do you care about being surveilled?
And it's just sort of like, well, it's not about whether or not I'm committing a crime. It's about whether or not, you know, we've had a discussion about what we should all know about each other, or like, why don't you just come over and ask me first.
I still want our community to have the ability to get people's stolen cars back, or to, like, find somebody who is a lost senior adult, or a child who's been abducted, you know? But these are problems we then get to solve together, rather than in this adversarial manner where everybody's an obstacle to some public good.

JASON KELLEY: One of the things that a lot of the people we talk with bring to this conversation, but you in particular, is, I don't know, optimism, joy, creativity.
You're someone who is dealing with some complicated, difficult, often depressing stuff, and you think about how to get people involved in ways that aren't, you know, uh, using the word dystopia, which is a word we use too much at EFF because it's too often becoming true. Cindy, I think, mentioned earlier the Adversarial Fashion line. I think you've done a lot of work in getting people who aren't necessarily engineers thinking clearly about, like, data issues.
Tell us a little bit about the adversarial fashion work and also just, you know, how we get more people involved in protecting privacy that aren't necessarily the ones working at Facebook, right?

KATE BERTASH: So one of the most fun things about the Adversarial Fashion line, uh, came in kind of researching how I was gonna do it. The reason I did it is because I actually spent some of my free time designing fabrics, like mostly stuff with little, you know, manatees or cats on them, silly things for kids.
And so I was like, yeah, it's a surface pattern. I could definitely do that. Seems easy. Uh, and I got to research and find out more about the role that art has in a lot of anti-surveillance movements. There are a lot of really cool anti-surveillance art projects. Uh, it has been amazing, as I present Adversarial Fashion in different places, to kind of show off how that works.
So the way that the Adversarial Fashion line works is that these clothes have, basically, you know, these sort of iterations of what kind of look like plates on them. And automated license plate readers are kind of interesting in that they're what a software engineer might term a system with low specificity, which is that they are working on a highway at, you know, 60, 70 miles an hour.
They're ingesting hundreds, sometimes thousands of plates a minute. So they really have to be generous in what they're willing to ingest. They vacuum up things like picket fences and billboards. And so clothing was kind of trivial, frankly, to get them to pick up as well.
And what was really nice about the example of, you know, a shirt that could be read as a car by some of these systems: it was very easy to show, especially on some of the open source systems that are the exact same models deployed in surveillance technology that's bought and sold, that you would really think differently about your plate being seen someplace, as something that might implicate you in a crime or determine a pattern of behavior or justify somebody surveilling you further, if it can be fooled by a t-shirt.
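The "low specificity" tradeoff Kate describes, a reader tuned to never miss a plate and therefore happy to read fences, billboards, or shirts, can be sketched with a toy matcher. (The regex and length threshold here are illustrative assumptions, not the actual logic of Flock, OpenALPR, or any real system.)

```python
import re

# Deliberately permissive, in the spirit of a "low specificity" reader:
# any 5-8 character run of letters and digits counts as a candidate plate,
# so a real plate at highway speed is rarely missed...
CANDIDATE_PLATE = re.compile(r"\b[A-Z0-9]{5,8}\b")

def candidate_plates(ocr_text: str) -> list:
    """Return every token the permissive matcher accepts as a plate."""
    return CANDIDATE_PLATE.findall(ocr_text.upper())

# ...but the same generosity means text on a billboard (or a shirt
# printed to look like a plate) gets ingested too.
assert candidate_plates("abc1234") == ["ABC1234"]          # a real plate
assert candidate_plates("Sale ends Sunday") == ["SUNDAY"]  # billboard junk
```

A shirt covered in plate-like strings exploits exactly this tradeoff: lowering the matcher's bar to catch every real plate necessarily widens what else it accepts.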
And, you know, much like the example we talked about, uh, with conversations being held on a place like Facebook, anti-surveillance artworks are cool in that they help people who feel like they're not technical enough, or don't really understand the underlying pieces of technology, to have a concrete example that they can form a really strong reaction to. Some of the people who really thrilled me were criminal defense attorneys, who reached out and asked a bunch of questions.
We have a lot of other people who are artists or designers who are like, how did you learn to use these systems? Did you need to know how to code? And I'm like, no, there are actually a bunch of ALPR apps available on, you know, the Apple store, and you can use them on your computer or your phone and test out the things that you've made.
And this actually works for many other systems. So, you know, facial recognition systems: if you wanna play around and come up with really great, you know, clothing or masks or makeup or something, you can actually test it with the facial recognition piece of Instagram or any of these different types of applications.
It's a lot of fun. I love getting to answer people's questions. I love seeing the kind of creative spark that they're like, oh yeah, maybe I am smart enough to understand this, or to try and fool it on my own. Or know that like these systems aren't maybe as complex or smart as I give them credit for.

JASON KELLEY: What I like about this especially is that you are, you know, pointing out that this stuff is actually not that complicated and we've moved into a world where often the kind of digital spaces we live in, the technology we use feels so opaque that people can't even understand how to begin to like modify it, or to understand how it works or how they would build it themselves.
It's something we've been talking about with other people: how there's sort of a moment where you realize that you can modify the digital world, or that you, you know, can understand how it works. Was there a moment in your work or in your life, um, where you realized that you could sort of understand that technology was there FOR you, not just there, like, to be thrust upon you?

KATE BERTASH: You know, it might be a little bit late in my life, but I think when I first got this job and I was like, oh my gosh, what am I going to do to really help kind of break through the many types of like privacy and safety problems that are facing this community, somebody had said, Kate, you should go to Def Con, and I went to Def Con, my very first one, and I was like blown back in my chair.
DEF CON is America's largest hacker conference. It takes place every single year in Las Vegas, and going there, you see not only these presentations on things that people have broken, but then there are places called villages that you walk through, where people show you how to break systems, or why, actually, it should be a lot harder to break this than it is.
Like the Voting Village: they buy old voting machines off of eBay and then, you know, teach everyone who walks in, within, you know, 20 minutes, how you can break into a voting machine. And it was just this, like, moment where I realized that you don't have to take technology as it is given to you. We all deserve technology that has our back and can't be modified or broken to hurt us.
And you can do that by yourself, sort of like actively tinkering on it. And I think that spirit of irreverence really carried through to a lot of the work that we do with Digital Defense Fund, where we get people all the time who come in worried about absolutely everything. It's so hard to decide what bite of the elephant to take first on, you know, improving the safety and privacy of the team, how they work, and the patients that they serve.
But then we get to kind of show people some great examples of how, actually, this isn't quite as complicated as you might think. I'm gonna walk you through, sort of, the difference of getting to use, like, one app to text with versus another, or turning on two-factor.
We love tools like Have I Been Pwned because they kind of help shape that understanding. You know, you think about how a hacker gets a password, and it feels so abstract or, like, technical, and then you realize: oh, actually, when somebody breaks these, they buy and sell them, and then somebody just takes old passwords and reuses them.
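That buy-and-reuse ecosystem is also why Have I Been Pwned's password check matters, and the check itself never has to see your password. A minimal sketch of its k-anonymity range protocol follows; the hashing and the prefix/suffix split match the real pwnedpasswords.com API, but the sample response body is fabricated for illustration.

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix sent
    to the server and the 35-char suffix that never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Scan a /range/{prefix} response ("SUFFIX:COUNT" per line) locally
    for our suffix; the server only ever learned the 5-char prefix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

prefix, suffix = sha1_prefix_suffix("password")
# Only `prefix` would be sent, e.g. as
#   GET https://api.pwnedpasswords.com/range/5BAA6
# The two-line body below is fabricated to stand in for the real response.
fake_response = "0018A45C4D1DEF81644B54AB7F969B88D65:3\n" + suffix + ":9545824"
print(breach_count(suffix, fake_response))  # a large count: change that password
```

Real clients call the live API over HTTPS (and modern ones request padded responses); this sketch keeps the protocol logic offline and testable.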
That seems far more intuitive. I can now understand the ecosystem and the logic behind so much of security, and it builds on itself. And I think the thing that I'm most proud of is that we not only have this community of folks that we've worked with to improve their safety, who we've introduced to personal and professional development opportunities to keep growing that understanding; we also manage an amazing community of technologists who build their own systems.
There's one group called the DC Abortion Fund who built their own case management platform, because they were not being served by any of these corporate or enterprise options that charge way too much. They have, like, you know, dozens of case managers, so that many seats was never gonna be affordable. And so they just sat down and, you know, worked with Code for DC and built it out, hand in hand.
And that is a project that I always point to as like, you know, it took somebody saying to themselves, I deserve better than this, and I can learn from everything I like about, you know, systems that you can buy and sell, but also like our community's gonna build what we need.
And to be supported to do that and have that encouragement is, is one of the reasons that I'm so proud that, um, over these years, the number of sort of self-built and community built software projects and other types of like ways that people deploy more secure technology to each other and teach each other has grown by leaps and bounds.
My job is so different now than what it was eight years ago, because people are hungry for it. They know that they are ready to become their own experts in their communities. And the requests that we get for more train-the-trainer type material, or to help equip people to bring this back to their space the way, you know, I brought my ALPR presentation back to my own community, keep growing. It's great to see that everyone is so much more encouraged, especially in these times when systems are unstable, nonprofits spin up and down, and we all have funding problems that often have very little to do with the demand for those resources. That's not the end of the story.
So, yeah, I love it. It's been a wonderful journey, seeing how everything has changed from, like you said, that spirit of being always worried that things are getting worse, focusing on this dystopia, to seeing, sort of, you know, how our own community has expanded its imagination. It's really wonderful.

CINDY COHN: What a joy it is to talk to someone like Kate. She brings this spirit of irreverence that I think is great, and she centers it on DEF CON, because that's a community that definitely takes security seriously but doesn't take itself very seriously. So I really love that attitude, and how important that is, I hear, for building community, building resilience through what are pretty dark times for the community that she works with.

JASON KELLEY: And building that understanding that you have the not just ability, but like the right to work with the technology that is presented to you and to understand it and to take it apart and to rebuild it. All of that is, I think, critical to, you know, building the better internet that we want.
And Kate really shows how just, you know, going to a DEF CON village can change your whole mind about that sort of thing, and hopefully people who don't have technical skills will recognize that you actually don't necessarily need them to do what she's describing. That's another thing she said that I really liked: you know, she could show up in a room and talk to 40 people about surveillance, and she doesn't have to talk about it at a technical level, really, just saying, hey, here's how this works. Did you know that? And anyone can do that. You know, you just have to show up.

CINDY COHN: Yeah. And how important these, like, hyperlocal conversations are to really getting a handle on combating this idea that we can surveil ourselves to safety. What I really loved about that story, about gathering her community together, including the sheriff, is that, you know, they actually had a real conversation about the impact of what the sheriff is doing with ALPRs, and really were able to be like, look, I want you to be able to catch people who are stealing cars, but also there are these other ramifications. Really bringing it down to a human level is one of the ways we get people to stop thinking that we can surveil ourselves to safety, and that technology can just replace the kind of individual, community-based conversations we need to have.

JASON KELLEY: Yeah. She really is maybe one of the best people I've ever spoken to at bringing it down to that human level.

CINDY COHN: I think of people like Kate as the connective tissue between the communities that really need technologies that serve them, and the people who either develop those technologies or think about them or advocacy groups like us who are kind of doing the policy level work or the national level or even international level work on this.
We need those, those bridges between the communities that need technologies and the people who really think about it in the kind of broader perspective or develop it and deploy it.

JASON KELLEY: I think the thing that I'm gonna take away from this most is again, just Kate's creativity and the fact that she's so optimistic and this is such a difficult topic and, and we're living in such, you know, easily described as dystopic times. Um, but, uh, she's sort of alive with the idea that it doesn't have to be that way, which is really the, the whole point of the podcast. So she embodied it really well.

CINDY COHN: Yep. And this season we're gonna be really featuring the technologies of freedom, the technologies we need in these particular times.
And Kate is just one example of so many people who are really bright spots here and pointing the way to, you know, how we can fix the internet and build ourselves a better future.

JASON KELLEY: Thanks for joining us for this episode – and this new season! – of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley.

CINDY COHN: And I’m Cindy Cohn.

EFF Leads Prominent Security Experts in Urging Trump Administration to Leave Chris Krebs Alone

Par : Josh Richman
28 avril 2025 à 14:11
Political Retribution for Telling the Truth Weakens the Entire Infosec Community and Threatens Our Democracy; Letter Remains Open for Further Sign-Ons

SAN FRANCISCO – The Trump Administration must cease its politically motivated investigation of former U.S. Cybersecurity and Infrastructure Security Agency Director Christopher Krebs, the Electronic Frontier Foundation (EFF) and dozens of prominent cybersecurity and election security experts (now hundreds; see update below) urged in an open letter. 

The letter – signed by preeminent names from academia, civil society, and the private sector – notes that security researchers play a vital role in protecting our democracy, securing our elections, and building, testing, and safeguarding government infrastructure. 

“By placing Krebs and SentinelOne in the crosshairs, the President is signaling that cybersecurity professionals whose findings do not align with his narrative risk having their businesses and livelihoods subjected to spurious and retaliatory targeting, the same bullying tactic he has recently used against law firms,” EFF’s letter said. “As members of the cybersecurity profession and information security community, we counter with a strong stand in defense of our professional obligation to report truthful findings, even – and especially – when they do not fit the playbook of the powerful. And we stand with Chris Krebs for doing just that.” 

President Trump appointed Krebs as Director of the Cybersecurity and Infrastructure Security Agency in the U.S. Department of Homeland Security in November 2018, and then fired him in November 2020 after Krebs publicly contradicted Trump's false claims of widespread fraud in the 2020 presidential election. 

Trump issued a presidential memorandum on April 9 directing Attorney General Pam Bondi and Homeland Security Secretary Kristi Noem to investigate Krebs, and directing Bondi and Director of National Intelligence Tulsi Gabbard to revoke security clearances held by Krebs and the cybersecurity company for which he worked, SentinelOne.  EFF’s letter urges that both of these actions be reversed immediately. 

“An independent infosec community is fundamental to protecting our democracy, and to the profession itself,” EFF’s letter said. “It is only by allowing us to do our jobs and report truthfully on systems in an impartial and factual way without fear of political retribution that we can hope to secure those systems. We take this responsibility upon ourselves with the collective knowledge that if any one of us is targeted for our work hardening these systems, then we all can be. We must not let that happen. And united, we will not let that happen.” 

EFF also has filed friend-of-the-court briefs supporting four law firms targeted for retribution in Trump’s unconstitutional executive orders. 

For the letter in support of Krebs: https://www.eff.org/document/chris-krebs-support-letter-april-28-2025

To sign onto the letter: https://eff.org/r.uq1r 

Update 04/29/2025: The letter now has over 400 signatures. You can view it here: https://www.eff.org/ChrisKrebsLetter

Contact: 
William
Budington
Senior Staff Technologist

Judge Rejects Government’s Attempt to Dismiss EFF Lawsuit Against OPM, DOGE, and Musk

Par : Josh Richman
3 avril 2025 à 13:15
Court Confirms That, If Proven, DOGE’s Ongoing Access to Personnel Records Is Illegal

NEW YORK—A lawsuit seeking to stop the U.S. Office of Personnel Management (OPM) from disclosing tens of millions of Americans’ private, sensitive information to Elon Musk’s “Department of Government Efficiency” (DOGE) can continue, a federal judge ruled Thursday. 

Judge Denise L. Cote of the U.S. District Court for the Southern District of New York partially rejected the defendants’ motion to dismiss the lawsuit, which was filed Feb. 11 on behalf of two labor unions and individual current and former government workers across the country. This decision is a victory: The court agreed that the claims that OPM illegally disclosed highly personal records of millions of people to DOGE agents can move forward with the goal of stopping that ongoing disclosure and requiring that any shared information be returned. 

Cote ruled current and former federal employees "may pursue their request for injunctive relief under the APA [Administrative Procedure Act]. ...  The defendants’ Kafkaesque argument to the contrary would deprive the plaintiffs of any recourse under the law." 

"The complaint plausibly alleges that actions by OPM were not representative of its ordinary day-to-day operations but were, in sharp contrast to its normal procedures, illegal, rushed, and dangerous,” the judge wrote.  

The Court added: “The complaint adequately pleads that the DOGE Defendants 'plainly and openly crossed a congressionally drawn line in the sand.'" 

OPM maintains databases of highly sensitive personal information about tens of millions of federal employees, retirees, and job applicants. The lawsuit by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to DOGE agents in violation of the federal Privacy Act of 1974, a watershed anti-surveillance statute that prevents the federal government from abusing our personal information. 

The lawsuit’s union plaintiffs are the American Federation of Government Employees AFL-CIO and the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO. 

“Today’s legal victory sends a crystal-clear message: Americans’ private data stored with the government isn't the personal playground of unelected billionaires,” said AFGE National President Everett Kelley. “Elon Musk and his DOGE cronies have no business rifling through sensitive data stored at OPM, period. AFGE and our allies fought back – and won – because we will not compromise when it comes to protecting the privacy and security of our members and the American people they proudly serve.” 

As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs;  nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records.  

OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.  

With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President. The judge ruled that the request for an injunction under the Privacy Act claims can go forward under the Administrative Procedure Act, but not directly under the Privacy Act.  

For the order denying the motion to dismiss: https://www.eff.org/document/afge-v-opm-opinion-and-order-motion-dismiss 

For the complaint: https://www.eff.org/document/afge-v-opm-complaint 

For more about the case: https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management 

Contacts 

Electronic Frontier Foundation: press@eff.org 

Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com 

Vote for “How to Fix the Internet” in the Webby Awards People's Voice Competition!

Par : Josh Richman
1 avril 2025 à 14:51

EFF’s “How to Fix the Internet” podcast is a nominee in the Webby Awards 29th Annual People's Voice competition – and we need your support to bring the trophy home!

Vote now!

We keep hearing all these dystopian stories about technology’s impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future. 

That’s where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, “How to Fix the Internet” explores creative solutions to some of today’s biggest tech challenges.    

Over our five seasons, we’ve had well-known, mainstream names like Marc Maron to discuss patent trolls, Adam Savage to discuss the rights to tinker and repair, Dave Eggers to discuss when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, to discuss how Congress can foster an internet that benefits everyone. But we’ve also had lesser-known names who do vital, thought-provoking work – Taiwan’s then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment.   

We’ve just recorded our first interview for Season 6, and episodes should start dropping next month! Meanwhile, you can catch up on our past seasons to become deeply informed on vital technology issues and join the movement working to build a better technological future.  

 And if you’ve liked what you’ve heard, please throw us a vote in the Webbys competition!  

Vote now!

Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible. 

Click below to listen to the show now, or choose your podcast player:

play
Privacy info. This embed will serve content from simplecast.com

Listen on Apple Podcasts Badge Listen on Spotify Podcasts Badge  Subscribe via RSS badge

Or get our YouTube playlist! Or, listen to the episodes on the Internet Archive!

Podcast Episode Rerelease: Dr. Seuss Warned Us

By Josh Richman
March 23, 2025 at 12:42 p.m.

This episode was first released on May 2, 2023.

We’re excited to announce that we’re working on a new season of How to Fix the Internet, coming in the next few months! But today we want to lift up an earlier episode that has particular significance right now. In 2023, we spoke with our friend Alvaro Bedoya, who was appointed as a Commissioner for the Federal Trade Commission in 2022. In our conversation, we talked about his work there, about why we need to be wary of workplace surveillance, and why it’s so important for everyone that we strengthen our privacy laws. We even talked about Dr. Seuss!

Last week the Trump administration attempted to terminate Alvaro, along with another FTC commissioner, even though Alvaro's appointment doesn't expire until 2029. The law is clear: The president does not have the power to fire FTC commissioners at will. The FTC’s focus on protecting privacy has been particularly important over the last five years; with Alvaro's firing, the Trump Administration has stepped far away from that needed focus to protect all of us as users of digital technologies.

We hope you’ll take some time to listen to this May 2023 conversation with Alvaro about the better digital world he’s been trying to build through his work at the FTC and his previous work as the founding director of the Center on Privacy & Technology at Georgetown University Law Center.

Dr. Seuss wrote a story about a Hawtch-Hawtcher Bee-Watcher whose job it is to watch his town’s one lazy bee, because “a bee that is watched will work harder, you see.” But that doesn’t seem to work, so another Hawtch-Hawtcher is assigned to watch the first, and then another to watch the second... until the whole town is watching each other watch a bee.

play
Privacy info. This embed will serve content from simplecast.com

 Listen on Apple Podcasts Badge Listen on Spotify Podcasts Badge  Subscribe via RSS badge

You can also find this episode on the Internet Archive and on YouTube.

To Federal Trade Commissioner Alvaro Bedoya, the story—which long predates the internet—is a great metaphor for why we must be wary of workplace surveillance, and why we need to strengthen our privacy laws. Bedoya has made a career of studying privacy, trust, and competition, and wishes for a world in which we can do, see, and read what we want, living our lives without being held back by our identity, income, faith, or any other attribute. In that world, all our interactions with technology—from social media to job or mortgage applications—are on a level playing field.

Bedoya speaks with EFF’s Cindy Cohn and Jason Kelley about how fixing the internet should allow all people to live their lives with dignity, pride, and purpose.

In this episode, you’ll learn about: 

  • The nuances of work that “bossware,” employee surveillance technology, can’t catch. 
  • Why the Health Insurance Portability and Accountability Act (HIPAA) isn’t the privacy panacea you might think it is. 
  • Making sure that one-size-fits-all privacy rules don’t backfire against new entrants and small competitors. 
  • How antitrust fundamentally is about small competitors and working people, like laborers and farmers, deserving fairness in our economy. 

Alvaro Bedoya was nominated by President Joe Biden, confirmed by the U.S. Senate, and sworn in on May 16, 2022, as a Commissioner of the Federal Trade Commission; his term expires in 2029. Bedoya was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. He has been influential in research and policy at the intersection of privacy and civil rights, and co-authored a 2016 report on the use of facial recognition by law enforcement and the risks that it poses. He previously served as the first Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law after its founding in 2011, and as Chief Counsel to former U.S. Sen. Al Franken (D-MN); earlier, he was an associate at the law firm WilmerHale. A naturalized immigrant born in Peru and raised in upstate New York, Bedoya previously co-founded the Esperanza Education Fund, a college scholarship for immigrant students in the District of Columbia, Maryland, and Virginia. He also served on the Board of Directors of the Hispanic Bar Association of the District of Columbia. He graduated summa cum laude from Harvard College and holds a J.D. from Yale Law School, where he served on the Yale Law Journal and received the Paul & Daisy Soros Fellowship for New Americans.

Transcript

ALVARO BEDOYA
One of my favorite Dr. Seuss stories is about this town called Hawtch Hawtch. So, in the town of Hawtch Hawtch, there's a town bee and, you know, they presumably make honey, but the Hawtch-Hawtchers one day realize that the bee that is watched will work harder, you see? And so they hire a Hawtch-Hawtcher to be on bee-watching watch, but then, you know, the bee isn't really doing much more than it normally is doing. And so they think, oh, well, the Hawtch-Hawtcher is not watching hard enough. And so they hire another Hawtch-Hawtcher to be on bee-watcher-watcher watch, I think is what Dr. Seuss calls it. And so there's this wonderful drawing of 12 Hawtch-Hawtchers, you know, each one either on watching-watching watch or actually, you know, the first one's watching the bee, and, and the whole thing is just completely absurd.

CINDY COHN
That’s FTC Commissioner Alvaro Bedoya describing his favorite Dr. Seuss story – which he says works perfectly as a metaphor for why we need to be wary of workplace surveillance, and strengthen our privacy laws.

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy. This is our podcast, How to Fix the Internet.

Our guest today is Alvaro Bedoya. He’s served as a commissioner for the Federal Trade Commission since May of 2022, and before that he was the founding director of the Center on Privacy & Technology at Georgetown University Law Center, where he was also a visiting professor of law. So he thinks a lot about many of the issues we’re also passionate about at EFF – trust, privacy, competition, for example – and about how these issues are all deeply intertwined.

CINDY COHN
We decided to start with our favorite question: What does the world look like if we get this stuff right?

ALVARO BEDOYA
For me, I think it is a world where you wake up in the morning, live your life, and your ability to do what you want to do, see what you wanna see, read what you wanna read, and live the life that you want to live is unconnected to who you are, in a good way.

In other words, what you look like, what side of the tracks you're from, how much money you have. Your gender, your gender identity, your sexuality, your religious beliefs, that those things don't hold you down in any way, and that you can love those things and have those things be a part of your life. But that they only empower you and help you. I think it's also a world… we see the great parts of technology. You know, one of the annoying things of having worked in privacy for so long is that you're often in this position where you have to talk about how technology hurts people. Technology can be amazing, right?

Mysterious, wonderful, uh, empowering. And so I think this is a world where those interactions are defined by those positive aspects of technology. And so for me, when I think about where those things go wrong, sorry, falling into old tropes here, but thinking about it positively, increasingly, people are applying for jobs online. They're applying for mortgages online. They are doing all these capital letter decisions that are now mediated by technology.

And so this world is also a world where, again, you are treated fairly in those decisions and you don't have to think twice about, hold on a second, I just applied for a loan. I just applied for a job, you know, I just applied for a mortgage. Is my zip code going to be used against me? Is my social media profile, you know, that reveals my interests gonna be used against me. Is my race gonna be used against me? In this world, none of that happens, and you can focus on preparing for that job interview and finding the right house for you and your family, finding the right rental for you and your family.

Now, I think it's also a world where you can start a small business without fear that the simple fact that you're not connected to a bigger platform or a bigger brand won't be used against you, where you have a level playing field to win people over.

CINDY COHN
I think that's great. You know, leveling the playing field is one of the original things that we were hoping, you know, that digital technologies could do. It also makes me think of that old New Yorker thing, you know, on the internet, no one knows you're a dog.

ALVARO BEDOYA
(Laughs) Right.

CINDY COHN
In some ways I think that is the vision of the internet. You know, again, I don't think that people should leave the other parts of their lives behind when they go on the internet. Your identity matters, but the fact that you're a dog doesn't mean you can't play. I'm probably butchering that poor cartoon too much.

ALVARO BEDOYA
No, I don't. I don't think you are, but I don't know why it did, but it reminded me of one other thing, which is, in this world, you, you go to work, whether it's at home in your basement like I am now, you know, or in your car or at an office, uh, at a business. And you have a shot at working with pride and dignity, where every minute of your work isn't measured and quantified. Where you have the ability to focus on the work rather than the surveillance of that work and the judgments that other people might make around that minute surveillance, and, and you can focus on the work itself. I think too often people don't recognize the strangeness of the fact that when you watch TV, when you watch a streaming site, when you watch cable, when you go shopping, all of that stuff is protected by privacy law. And yet most of us spend a good part of our waking hours working, and there are really no federal, uh, worker privacy protections. That, for me, is, is one of the biggest gaps in our sectoral privacy system that we've yet to confront.

But the world that you wanted me to talk about definitely is a world where you can go to work and do that work with dignity and pride, uh, without minute surveillance of everything you do.

CINDY COHN
Yeah. And I think inherent in that is this, you know, this, this observation that, you know, being watched all the time doesn't work as a matter of humanity, right? It's a human rights issue to be watched all the time. I mean, that's why when they build prisons, right, it's the panopticon, right? That's where that idea comes from, is this idea that people who have lost their liberty get watched all the time.

So that has to be a part of building this better future, a space where, you know, we’re not being watched all the time. And I think you're exactly right that we kind of have this gigantic hole in people's lives, which is their work lives where it's not only that people don't have enough freedom right now, it's actually headed in the other direction. I know this is something that we think about a lot, especially Jason does at EFF.

JASON KELLEY
Yeah, I mean we, we write quite a bit about bossware. We've done a variety of research into bossware technology. I wonder if you could talk a little bit about maybe, like, some concrete examples that you've seen where that technology is sort of coming to fruition, if you will. Like, it's being used more and more, and, and why we need to, to tackle it, because I think a lot of people probably, uh, listening to this aren't, aren't as familiar with it as they could be.

And at the top of this episode we heard you describe your favorite Dr. Seuss tale – about the bees and the watchers, and the watchers watching the watchers, and so on to absurdity. Now can you tell us why you think that’s such an important image?

ALVARO BEDOYA
I think it's a valuable metaphor for the fact that a lot of this surveillance software may not offer as complete a picture as employers might think it does. It may not have the effect that employers think it does, and it may not ultimately do what people want it to do. And so I think that anyone who is thinking about using the software should ask hard questions about, ‘Is this actually gonna capture what I'm being told it will capture? Does it account for the 20% of tasks in my workers' jobs?’ So, you know, there's always an 80/20 rule, and so, you know, as with work, most of what you do is one thing, but there's usually 20% that's another thing. And I think there's a lot of examples where that 20%, like, you know, occasionally using the bathroom, right, isn't accounted for by the software. And so it looks like the employee's slacking, but actually they're just being a human being. And so I would encourage people to ask hard questions about the sophistication of the software and how it maps onto the realities of work.

JASON KELLEY
Yeah. That's a really accurate way for people to start to think about it, because I think a lot of people really feel that, um, if they can measure it, then it must be useful.

ALVARO BEDOYA
Yes!

JASON KELLEY
In my own experience, before I worked at EFF, I worked somewhere where, eventually, a sort of boss ware type tool was installed and it had no connection to the job I was doing.

ALVARO BEDOYA
That’s interesting.

JASON KELLEY
It was literally disconnected.

ALVARO BEDOYA
Can you share the general industry?

JASON KELLEY
It was software. I worked as a, I was in marketing for a software company, and um, I was remote, and it was remote way before the pandemic. So, you know, there's sort of, I think bossware has increased probably during the pandemic. I think we've seen that, because people are worried that if you're not in the office, you're not working.

ALVARO BEDOYA
Right.

JASON KELLEY
There's no evidence. Bossware can't give evidence that that's true. It can just give evidence on, you know, whether you're at your computer –

ALVARO BEDOYA
Right. Whether you're typing.

JASON KELLEY
Whether you're typing. Yeah. And what happened in my scenario, without going into too much detail, was that it mattered what window I was in. And it didn't always. At first it was just like, are you at your computer for eight hours? And then it was, are you at your computer in these specific windows for eight hours? And then it was, are you typing in those specific windows for eight hours? The screws kept getting twisted, right, until I was actually at my computer for 12 hours to get eight hours of ‘productive’ work in, as it was called.

And so, yeah, I left that job. Obviously, I work at EFF now for a reason. And it was one of the things that I remembered when I started at EFF: part of what I like about what we do is that we think about people's humanity in what they're doing and how that interacts with technology.

And I think bossware is one of those areas where it doesn't, um, because it, it is so common for an employer to sort of disengage from the employee and sort of think of them as like a tool. It's, it's an area where it's easy to install something, or try to install something, where that happens. So I'm glad you're working on it. It's definitely an issue.

ALVARO BEDOYA
Well, I'm thinking about it, you know, and it's certainly something I, I care about. And I think, I think my hope is, my hope is that, um, you know, the pandemic was horrific. Is horrific. My hope is that one of the realizations coming out of it, from so many people going remote, is the realization that, particularly for some jobs, you know, uh, um, a lot of us are lucky to have these jobs where a lot of our time turns on being able to think clearly and carefully about, about something, and that's a luxury.

Um, but particularly for those jobs, my, my suspicion is for an even broader range of jobs, that this idea of a workday where you sit down, work eight hours and get up, you know, and, and that that is the ideal workday, I don't think that's a maximally productive day. And I think there's some really interesting trials around the four-day work week, and my hope is that, you know, when my kids are older, that there will be a recognition that working harder, staying up later, getting up earlier, is not the best way to get the best work from people. And people need time to think. They need time to relax. They need time to process things. And so that is my hope, that that is one of the realizations around it. But you're exactly right, Jason, is that one of my concerns around this software is that there's this idea that if it can be measured, it must be important. And I think you use a great example, speaking in general here, of software that may presume that if you aren't typing, you're not working, or if you're not in a window, you're not working, when actually you might be doing the most important work. You know, jotting down notes, organizing your thoughts, that lets you do the best stuff, as it were.

Music transition

JASON KELLEY
I want to jump in for a little mid-show break to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology, enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. So a tip of the hat to them for their assistance.

Now back to our conversation with Alvaro Bedoya.

CINDY COHN
Privacy issues are of course near and dear to our hearts at EFF and I know that's really the world you come out of as well. Although your perch is a little, a little different right now. We came to the conclusion that we can't address privacy if we don't address competition and antitrust issues. And I think you've come someplace similar perhaps, and I'd love for you to talk about how you think privacy and questions around competition and antitrust intertwine.

ALVARO BEDOYA
So I will confess, I don't know if I have figured it out, but I can offer a few thoughts. First of all, I think that a lot of the antitrust claims are not what they seem to be, when companies talk about how important it is to have gatekeeping around app stores because of privacy. And this is one of the reasons I support the bills, I think it's the Blumenthal-Blackburn bill, to, um, to change the way app stores are, are run, and, and, and kick the tires on that gatekeeping model, because I am skeptical about a lot of those pro-privacy, anti-antitrust claims. That is one thing. On the other hand, I do think we need to think carefully about the rules that are put in place backfiring against new entrants and small competitors. And I think a lot of legislators and policy makers in the US and Europe appreciate this and are getting this right, and institute a certain set of rules for bigger companies and different ones for smaller ones. I think one of the ways this can go wrong is when it's just about the size of the company rather than the size of the user base.

I think that if you are, you know, suddenly at a hundred million users, then you're not a small company, even if you have, you know, a small number of employees. But I, I do think that those concerns are real, and that policy makers and people in my role need to think about the costs of privacy compliance in a way that does not inadvertently create an unlevel playing field for, for small competitors.

I will confess that sometimes things that appear to be, uh, um, antitrust problems are privacy problems, in that they reflect legal gaps around the sectoral privacy framework that unfortunately has yet to be updated. So I think I can give one example. There was the recent merger of, uh, Amazon and One Medical, and, well, I can't go into the antitrust analysis that may or may not have occurred at the commission, but I wrote a statement on the completion of the merger which highlighted a gap that we have around the anonymization rule in our health privacy law. For example, people think that HIPAA is actually the Health Information Privacy Act. It's not; it's actually the Health Insurance Portability and Accountability Act. And I think that little piece of common wisdom speaks to a broader gap in our understanding of health privacy. So I think a lot of people think HIPAA will protect their data and that it won't be used in other ways by their doctor, by whoever it is that has their HIPAA-protected data. Well, it turns out that in 2000, when HHS promulgated the privacy rule in good faith, it had a provision that said, hey, look, we want to encourage the improvement in health services, we want to encourage health research, and we want to encourage public health. And so we're gonna say that if you remove these, you know, 18 identifiers from health data, that it can be used for other purposes. And if you look at the rule that was issued, the justification for it is that they want to promote public health.

Unfortunately, they did not put a use restriction on that. And so now, if any doctor's practice, anyone covered by HIPAA, and I'm not gonna go into the rabbit hole of who is and who isn't, but if you're covered by HIPAA, all they need to do is remove those identifiers from the data.

And HHS is unfortunately very clear that you can essentially do a whole lot of things that have nothing to do with healthcare as long as you do that. And what I wrote in my statement is that would surprise most consumers. Frankly, it surprised me when I connected the dots.

CINDY COHN
What I'm hearing here, which I think is really important, is, first of all, we start off by thinking that some of our privacy problems are really due to antitrust concerns, but what we learn pretty quickly when we're looking at this is that privacy is used, frankly, as a blocker for common sense reforms that we might need. These giants come in and they say, well, we're gonna protect people's privacy by limiting what apps are in the app store, and, and we need to look closely at that, because it doesn't seem to be necessarily true.

So first of all, you have to watch out for the kind of fake privacy argument, or the argument that the tech giants need to be protected because they're protecting our privacy, and we need to really interrogate that. And at the bottom of it, it often comes down to the fact that we haven't really protected people's privacy as a legal matter, right? We, we ground ourselves in Larry Lessig's four pillars of change, right? Code, norms, laws, and markets. And you know, what they're saying is, well, we have to protect what is essentially a non-market: the tech giants say that markets will protect privacy, and so therefore we can't introduce more competition. And I think at the bottom of this, what we find a lot is that, you know, the law should be setting the baseline, and then markets can build on top of that. But we've got things a little backwards. And I think that's especially true in health. It's, it's very front and center for those of us who care about reproductive justice, who are looking at the way health insurance companies are now part and parcel of other data analysis companies. And the Amazon/One Medical one is, is another one of those where, unless we get the privacy law right, it's gonna be hard to get at some of these other problems.

ALVARO BEDOYA
Yeah. And those are the three things that I think a lot about. First, that those pro-privacy arguments that seem to cut against, uh, competition concerns are often not what they seem.

Second, that we do need to take into account how one-size-fits-all privacy rules could backfire in a way that hurts, uh, small companies, small competitors, uh, who are the lifeblood of, uh, innovation and employment, frankly. And, and lastly, sometimes what we're actually seeing are gaps in our sectoral privacy system.

CINDY COHN
One of the things that I know you've, you've talked about a little bit is, um, you're calling it a return to fairness, and that's specifically talking about a piece of the FTC’s authority. And I wonder if you could talk about that a little more and how you see that fitting into a, a better world.

ALVARO BEDOYA
Sure. One of the best parts of this job, um, was having this need and opportunity to immerse myself in antitrust. So as a Senate staffer, I did a little bit of work on the Comcast, uh, NBC merger against, against that merger, uh, for my old boss, Senator Franken. But I didn't spend a whole lot of time on competition concerns. And so when I was nominated, I, you know, quite literally, you know, ordered antitrust treatises and read them cover to cover.

CINDY COHN
Wonderful!

ALVARO BEDOYA
Well, sometimes it's wonderful and sometimes it's not. But in this case it was. And what you see is this complete two-sided story, where on the one hand you have this really anodyne, efficiency-based description of antitrust, where it is about enforcing abstract laws and maximizing efficiency, and the saying, you know, antitrust protects competition, not competitors. And you so quickly lose sight of why we have antitrust laws and how we got them.

And so I didn't just read treatises on the law. I also read histories. And one of the things that you read and realize when you read those histories is that antitrust isn't about efficiency; antitrust is about people. And yes, it's about protecting competition, but the reason we have it is because of what happened to certain people. And so, you know, the Sherman Act, you listen to those floor debates, it is fascinating, because first of all, everyone agrees as to what we want to do, what Congress wanted to do. Congress wanted to rein in the trusts. They wanted to rein in John Rockefeller, J.P. Morgan, the beef trust, the sugar trust, the steel trust, not to mention, you know, Rockefeller's oil trust. The most common concern on the floor of the Senate was what was happening to cattlemen because of concentration in meat packing plants, and the prices they were getting when they brought their cattle to processors and to market. And then you look at, uh, 1914, the Clayton Act. Again, there was outrage, true outrage, about how those antitrust laws were used: 10 out of the first 12 antitrust injunctions in our, in our country post-Sherman were targeted at workers, and not just any workers. They were targeted at rail car manufacturers in Pullman, where it was an integrated workforce and they were working extremely long hours for a pittance in wages, and they decided to strike.

And some of the first injunctions we saw in this country were used to break their strike. Or how it was used against, uh, I think they're called drayage men or draymen in New Orleans, port workers and dock workers in New Orleans, who again, were working these 12-hour days for, for nothing in wages. And this beautiful thing happened in New Orleans where the entire city went on strike.

It was, I think it was 30 unions. It was like the typographical workers unions. And if you think that that refers to people typing on keyboards, it does. From the people typing on mechanical typewriters to the people, you know, loading and unloading ships in the port of New Orleans, everyone went on strike, and they had this, this organization called the Amalgamated Working Men's Council. And um, and they went, they wanted a 10-hour, uh, workday, they wanted overtime pay, and they wanted, uh, union shops. They got two out of those three things. But, um, but I think it was the trade board that was so unhappy with it that they, uh, persuaded federal prosecutors to sue under Sherman.

And it went before Judge Billings. And Judge Billings said, absolutely, this is a violation of the antitrust laws. And the curious thing about Judge Billings' decision, one of the first Sherman decisions in a federal court, is that he didn't cite, for the proposition that the strike was a restraint on trade, to restraint-of-trade law. He cited to much older decisions about criminal conspiracies and unions to justify his decision.

And so what I'm trying to say is over and over and over again, whenever, you know, you look at the actual history of antitrust laws, you know, it isn't about efficiency, it's about fairness. It is about how small competitors and working people, farmers, laborers, deserve a level playing field. And in 1890, 1914, 1936, 1950, this was what was front and center for Congress.

CINDY COHN
It's great to end with a deep dive into the original intent of Congress to protect ordinary people and fairness with antitrust laws, especially in this time when history and original intent are so powerful for so many judges. You know, it’s solid grounding for going forward. But I also appreciate how you mapped the history to see how that Congressional intent was perverted by the judicial branch almost from the very start.

This shows us where we need to go to set things right but also that it’s a difficult road. Thanks so much Alvaro.

JASON KELLEY
Well, it's a rare privilege to get to complain about a former employer directly to a sitting FTC commissioner. So that was a very enjoyable conversation for me. It's also rare to learn something new about Dr. Seuss and a Dr. Seuss story, which we got to do. But as far as actual concrete takeaways go from that conversation, Cindy, what did you pull away from that really wide ranging discussion?

CINDY COHN
It’s always fun to talk to Alvaro. I loved his vision of a life lived with dignity and pride as the goal of our fixed internet. I mean those are good solid north stars, and from them we can begin to see how that means that we use technology in a way that, for example, allows workers to just focus on their work. And honestly, while that gives us dignity, it also stops the kind of mistakes we’re seeing like tracking keystrokes, or eye contact as secondary trackers that are feeding all kinds of discrimination.

So I really appreciate him articulating, you know, what are the kinds of lives we wanna have. I also appreciate his thinking about the privacy gaps that get revealed as technology changes, and, and the, the story of healthcare and how HIPAA doesn't protect us in the way that we'd hoped it would, in part because I think HIPAA didn't start off at a very good place. But as things have shifted, and, say, you know, One Medical is being bought by Amazon, suddenly we see that the presumption of who your insurance provider was, and what they might use that information for, has shifted a lot, and that the privacy law hasn't, hasn't kept up.

So I appreciate thinking about it from, you know, both of those perspectives, both, you know, what the law gets wrong and how technology can reveal gaps in the law.

JASON KELLEY
Yeah. That really stood out for me as well, especially the parts where Alvaro was talking about looking into the law in a way that he hadn't had to before. Like you say, because that is kind of what we do at EFF, at least part of what we do. And it's nice to hear that we are sort of on the same page and that there are people in government doing that. There are people at EFF doing that. There are people all over, in different areas, doing that. And that's what we have to do, because technology does change so quickly and so much.

CINDY COHN
Yeah, and I really appreciate the deep dive he's done into antitrust law, revealing that fairness is a deep, deep part of it, and that this idea that it's only about efficiency, and especially efficiency for consumers only, is ahistorical. And that's a good thing for us all to remember, since we, especially these days, have a Supreme Court that, you know, really likes history a lot and grounds and limits what it does in history. The history's on our side in terms of, you know, bringing competition law, frankly, to the digital age.

JASON KELLEY
Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member or donate, or look at hoodies, t-shirts, hats or other merch.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

MUSIC CREDITS

This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:

Lost track by airtone
Common ground by airtone
Probably shouldn’t by J Lang

EFF Sends Transition Memo on Digital Policy Priorities to New Administration and Congress

By: Josh Richman
January 21, 2025 at 10:30
Topics Include National Security Surveillance, Consumer Privacy, AI, Cybersecurity, and Many More

SAN FRANCISCO—Standing up for technology users in 2025 and beyond requires careful thinking about government surveillance, consumer privacy, artificial intelligence, and encryption, among other topics. To help incoming federal policymakers think through these key issues, the Electronic Frontier Foundation (EFF) has shared a transition memo with the Trump Administration and the 119th U.S. Congress. 

“We routinely work with officials and staff in the White House and Congress on a wide range of policies that will affect digital rights in the coming years,” said EFF Director of Federal Affairs India McKinney. “As the oldest, largest, and most trusted nonpartisan digital rights organization, EFF’s litigators, technologists, and activists have a depth of knowledge and experience that remains unmatched. This memo focuses on how Congress and the Trump Administration can prioritize helping ordinary Americans protect their digital freedom.”  

The 64-page memo covers topics such as surveillance, including warrantless digital dragnets, national security surveillance, face recognition technology, border surveillance, and reproductive justice; encryption and cybersecurity; consumer privacy, including vehicle data, age verification, and digital identification; artificial intelligence, including algorithmic decision-making, transparency, and copyright concerns; broadband access and net neutrality; Section 230’s protections of free speech online; competition; copyright; the Computer Fraud and Abuse Act; and patents. 

EFF also shared a transition memo with the incoming Biden Administration and Congress in 2020. 

“The new Congress and the Trump Administration have an opportunity to make the internet a much better place for users. This memo should serve as a blueprint for how they can do so,” said EFF Executive Director Cindy Cohn. “We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we always work for technology users.” 

For the 2025 transition memo: https://eff.org/document/eff-transition-memo-trump-administration-2025 

For the 2020 transition memo: https://www.eff.org/document/eff-transition-memo-incoming-biden-administration-november-2020

Contact: 
India McKinney, Director of Federal Affairs
Maddie Daly, Assistant Director of Federal Affairs

EFF in the Press: 2024 in Review

By: Josh Richman
December 23, 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding and explores the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s Netchoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities. 

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF Launches Digital Rights Bytes to Answer Tech Questions that Bug Us All

By: Josh Richman
October 31, 2024 at 11:55
New Site Dishes Up Byte-Sized, Yummy, Nutritious Videos and Other Information About Your Online Life

SAN FRANCISCO—The Electronic Frontier Foundation today launched “Digital Rights Bytes,” a new website with short videos offering quick, easily digestible answers to the technology questions that trouble us all. 

“It’s increasingly clear there is no way to separate our digital lives from everything else that we do — the internet is now everybody's hometown. But nobody handed us a map or explained how to navigate safely,” EFF Executive Director Cindy Cohn said. “We hope Digital Rights Bytes will provide easy-to-understand information people can trust, and an entry point for thinking more broadly about digital privacy, freedom of expression, and other civil liberties in our digital world.” 

Initial topics on Digital Rights Bytes include “Is my phone listening to me?”, “Why is device repair so costly?”, “Can the government read my text messages?” and others. More topics will be added over time. 

For each topic, the site provides a brief animated video and a concise, layperson’s explanation of how the technology works. It also provides advice and resources for what users can do to protect themselves and take action on important issues. 

EFF is the leading nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. Its mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. 

For the Digital Rights Bytes website: https://www.digitalrightsbytes.org/

Contact: 
Jason Kelley, Activism Director

EFF and IFPTE Local 20 Attain Labor Contract

By: Josh Richman
October 16, 2024 at 11:17
First-Ever, Three-Year Pact Protects Workers’ Pay, Benefits, Working Conditions, and More

SAN FRANCISCO—Employees and management at the Electronic Frontier Foundation have achieved a first-ever labor contract, they jointly announced today.  EFF employees have joined the Engineers and Scientists of California Local 20, IFPTE.  

The EFF bargaining unit includes more than 60 non-management employees in teams across the organization’s program and administrative staff. The contract covers the usual scope of subjects including compensation; health insurance and other benefits; time off; working conditions; nondiscrimination, accommodation, and diversity; hiring; union rights; and more. 

"EFF is its people. From the moment that our staff decided to organize, we were supportive and approached these negotiations with a commitment to enshrining the best of our practices and adopting improvements through open and honest discussions,” EFF Executive Director Cindy Cohn said. “We are delighted that we were able to reach a contract that will ensure our team receives the wages, benefits, and protections they deserve as they continue to advance our mission of ensuring that technology supports freedom, justice and innovation for all people of the world.” 

“We’re pleased to have partnered with EFF management in crafting a contract that helps our colleagues thrive both at work and outside of work,” said Shirin Mori, a member of the EFF Workers Union negotiating team. “This contract is a testament to creative solutions to improve working conditions and benefits across the organization, while also safeguarding the things employees love about working at EFF. We deeply appreciate the positive working relationship with management in establishing a strong contract.” 

The three-year contract was ratified unanimously by EFF’s board of directors Sept. 18, and by 96 percent of the bargaining unit on Sept. 25. It is effective Oct. 1, 2024 through Sept. 30, 2027. 

EFF is the largest and oldest nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.  

The Engineers and Scientists of California Local 20, International Federation of Professional and Technical Engineers, is a democratic labor union representing more than 8,000 engineers, scientists, licensed health professionals, and attorneys at PG&E, Kaiser Permanente, the U.S. Environmental Protection Agency, Legal Aid at Work, numerous clinics and hospitals, and other employers throughout Northern California.  

For the contract: https://ifpte20.org/wp-content/uploads/2024/10/Electronic-Frontier-Foundation-2024-2027.pdf 

For more on IFPTE Local 20: https://ifpte20.org/ 

Podcast Episode Rerelease: So You Think You’re A Critical Thinker

By: Josh Richman
October 11, 2024 at 03:01

This episode was first released in March 2023.

With this year’s election just weeks away, concerns about disinformation and conspiracy theories are on the rise.

We covered this issue in a really enlightening talk in March 2023 with Alice Marwick, the director of research at Data & Society, and previously the cofounder and principal researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill.

We talked with Alice about why seemingly ludicrous conspiracy theories get so many followers, and when fact-checking does and doesn’t work. And we came away with some ideas for how to identify and leverage people’s commonalities to stem disinformation, while making sure that the most marginalized and vulnerable internet users are still empowered to speak out.

We thought this was a good time to republish that episode, in hopes that it might help you make sense of what you see and hear in the next few months.

If you believe conversations like this are important, we hope you’ll consider voting for How to Fix the Internet in the “General - Technology” category of the Signal Awards’ 3rd Annual Listener's Choice competition. Deadline for voting is Thursday, Oct. 17.

Vote now!

This episode was first published on March 21, 2023.

The promise of the internet was that it would be a tool to melt barriers and aid truth-seekers everywhere. But it feels like polarization has worsened in recent years, and more internet users are being misled into embracing conspiracies and cults.


You can also find this episode on the Internet Archive and on YouTube.

From QAnon to anti-vax screeds to talk of an Illuminati bunker beneath Denver International Airport, Alice Marwick has heard it all. She has spent years researching some dark corners of the online experience: the spread of conspiracy theories and disinformation. She says many people see conspiracy theories as participatory ways to be active in political and social systems from which they feel left out, building upon beliefs they already harbor to weave intricate and entirely false narratives.  

Marwick speaks with EFF’s Cindy Cohn and Jason Kelley about finding ways to identify and leverage people’s commonalities to stem this flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out.  

In this episode you’ll learn about:  

  • Why seemingly ludicrous conspiracy theories get so many views and followers  
  • How disinformation is tied to personal identity and feelings of marginalization and disenfranchisement 
  • When fact-checking does and doesn’t work  
  • Thinking about online privacy as a political and structural issue rather than something that can be solved by individual action 

Alice Marwick is director of research at Data & Society; previously, she was an Associate Professor in the Department of Communication and cofounder and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of Foreign Policy magazine’s 2017 Global Thinkers. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. She earned a political science and women's studies bachelor's degree from Wellesley College, a Master of Arts in communication from the University of Washington, and a PhD in media, culture and communication from New York University. 

Transcript

ALICE MARWICK
I show people these TikTok videos that are about these kind of outrageous conspiracy theories, like that the Large Hadron Collider at CERN is creating a multiverse. Or that there's, you know, this pyramid of tunnels under the Denver airport where they're trafficking children and people kinda laugh at them.

They're like, this is silly. And then I'm like, this has 3 million views. You know, this has more views than probably most of the major news stories that came out this week. It definitely has more views than any scientific paper or academic journal article I'll ever write, right? Like, this stuff has big reach, so it's important to understand it, even if it seems kind of frivolous or silly, or, you know, self-evident.

It's almost never self-evident. There's always some other reason behind it, because people don't do things arbitrarily. They do things that help them make sense of their lives. They give their lives meaning these are practices that people engage in because it means something to them. And so I feel like my job as a researcher is to figure out, what does this mean? Why are people doing this?

CINDY COHN
That’s Alice Marwick. The research she’s talking about is something that worries us about the online experience – the spread of conspiracy theories and misinformation. The promise of the internet was that it would be a tool that would melt barriers and aid truth-seekers everywhere. But sometimes it feels like polarization has worsened, and Internet users are misled into conspiracies and cults. Alice is trying to figure out why, how – and more importantly, how to fix it.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy.

This is our podcast series: How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to fix the internet. We're trying to make our digital lives better. EFF spends a lot of time warning about all the ways that things could go wrong and jumping into the fight when things do go wrong online, but what we'd like to do with this podcast is to give ourselves a vision of what the world looks like if we start to get it right.

JASON KELLEY
Our guest today is Alice Marwick. She’s a researcher at the Center for Information, Technology and Public Life at the University of North Carolina. She does qualitative research on a topic that affects everyone’s online lives but can be hard to grasp outside of anecdotal data – the spread of conspiracy theories and disinformation online.

This is a topic that many of us have a personal connection to – so we started off our conversation with Alice by asking what drew her into this area of research.

ALICE MARWICK
So like many other people I got interested in misinformation and disinformation in the run-up to the 2016 election. I was really interested in how ideas that had formerly been a little bit subcultural and niche in far-right circles were getting pushed into the mainstream and circulating really wildly and widely.

And in doing that research, it sort of helped me understand disinformation as a frame for understanding the way that information ties into marginalization more broadly. Disinformation is often a mechanism by which the stories that the dominant culture tells about marginalized people get circulated.

JASON KELLEY
I think it's been a primary focus for a lot of people in a lot of ways over the last few years. I know I have spent a lot of time on alternative social media platforms over the last few years because I find the topics kind of interesting to figure out what's happening there. And also because I have a friend who has kind of entered that space and, uh, I like to learn, you know, where the information that he's sharing with me comes from, essentially, right. But one thing that I've been thinking about with him and and with other folks is, is there something that happened to him that made him kind of easily radicalized, if you will? And I, I don't think that's a term that, that you recommend using, but I think a lot of people just assume that that's something that happens.

That there are people who, um, you know, grew up watching the X-files or something and ended up more able to fall into these misinformation and disinformation traps. And I'm wondering if that's, if that's actually true. It seems like from your research, it's not.

ALICE MARWICK
It's not, and that's because there's a lot of different things that bring people to disinformation, because disinformation is really deeply tied to identity in a lot of ways. There's lots of studies showing that more or less, every American believes in at least one conspiracy theory, but the conspiracy theory that you believe in is really based on who you are.

So in some cases it is about identity, but I think the biggest misconception about disinformation is that the people who believe it are just completely gullible and that they don't have any critical thinking skills and that they go on YouTube and they watch a video or they listen to a podcast and all of a sudden their entire mindset shifts.

CINDY COHN
So why is radicalization not the right term? How do you think about this term and why you've rejected it?

ALICE MARWICK
The whole idea of radicalization is tied up in this countering violent extremism movement that is multinational, that is tied to this huge surveillance apparatus, to militarization, to, in many ways, like a very Islamophobic idea of the world. People have been researching why individuals commit political violence for 50 years and they haven't found any individual characteristics that make someone more susceptible to doing something violent, like committing a mass shooting or participating in the January 6th insurrection, for example. What we see instead is that there are a lot of different puzzle pieces that can contribute to whether somebody takes on an ideology, and whether they commit acts of violence in service of that ideology.

And I think the thing that's frustrating to researchers is sometimes the same thing can have two completely different effects in people. So there's this great study of women in South America who were involved in guerilla warfare, and some of those women, when they had kids, they were like, oh, I'm not gonna do this anymore.

It's too dangerous. You know, I wanna focus on my family. But then there was another set of women that when they had kids, they felt they had more to lose and they had to really contribute to this effort because it was really important to the freedom of them and their children.

So when you think about radicalization, there's this real desire to have this very simplistic pathway that everybody kind of just walks along and they end up a terrorist. But that's just not the way the world works. 

The second reason I don't like radicalization is because white supremacy is baked into the United States from its inception. And white supremacist ideas and racist ideas are pretty foundational. And they're in all kinds of day-to-day language and media and thinking. And so why would we think it's radical to be, for example, anti-black or anti-trans when anti-blackness and anti-transness have like these really long histories?

CINDY COHN
Yeah, I think that's right. And there is a way in which radicalization makes it sound as if, um, that's something other than our normal society. In many instances, that's not actually what's going on.

There's pieces of our society, the water we swim in every day, that are playing a big role in some of this stuff that ends up in a very violent place. And so by calling it radicalization, we're kind of creating an other that we're not a part of, which I think means that we might miss some of the pieces of this.

ALICE MARWICK
Yeah, and I think that when we think about disinformation, the difference between a successful and an unsuccessful disinformation campaign is often whether or not the ideas exist in the culture already. One of the reasons QAnon, I think, has been so successful is that it picks up a lot of other pre circulating conspiracy theories.

It mixes them with anti-Semitism, it mixes them with homophobia and transphobia, and it kind of creates this hideous concoction, this like potion that people drink that reinforces a lot of their preexisting beliefs. It's not something that comes out of nowhere. It's something that's been successful precisely because it reinforces ideas that people already had.

CINDY COHN
I think the other thing that I saw in your research that might have been surprising, or at least was a little surprising to me, is how participatory QAnon is.

You took a look at some of the QAnon conversations, and you could see people pulling in pieces of knowledge from other things, you know, flight patterns and unexplained deaths and other things. It's something that they're co-creating, um, which I found fascinating.

ALICE MARWICK
It's really similar to the dynamics of fandom in a lot of ways. You know, any of us who have ever participated in, like, a Usenet group or a subreddit about a particular TV show, know that people love putting theories together. They love working together to try to figure out what's going on. And obviously we see those same dynamics at play in a lot of different parts of internet culture.

So it's about taking the participatory dynamics of the internet and sort of mixing them with what we're calling conspiratorial literacy, which is sort of the ability to assemble these narratives from all these disparate places, to kind of pull together, you know, photos and Wikipedia entries and definitions and flight paths and, you know, news stories into these sort of narratives that are really hard to make coherent sometimes, ‘cause they get really complicated.

But it's also about a form of political participation. I think there's a lot of people in communities where disinformation is rampant, where they feel like talking to people about QAnon or anti-vaxxing or white supremacy is a way that they can have some kind of political efficacy. It's a way for them to participate, and sometimes I think people feel really disenfranchised in a lot of ways.

JASON KELLEY
I wonder because you mentioned internet culture, if some of this is actually new, right? I mean, we had satanic panics before and something I hear a lot of in various places is that things used to be so much simpler when we had four television channels and a few news anchors and all of them said the same thing, and you couldn't, supposedly, you couldn't find your way out into those other spaces. And I think you call this the myth of the epistemically consistent past. Um, and is that real? Was that a real time that actually existed? 

ALICE MARWICK
I mean, let's think about who that works for, right? If you're thinking about like 1970, let's say, and you're talking about a couple of major TV networks, no internet, you know, your main interpersonal communication is the telephone. Basically, what the mainstream media is putting forth is the narrative that people are getting.

And there's a very long history of critique of the mainstream media, of putting forth a narrative that's very state sponsored, that's very pro-capitalist, that writes out the histories of lots and lots of different types of people. And I think one of the best examples of this is thinking about the White Press and the Black Press.

And the Black Press existed because the White Press didn't cover stories that were of interest to the black community, or they strategically ignored those stories. Like the Tulsa Race massacre, for example, like that was completely erased from history because the white newspapers were not covering it.

So when we think about an epistemically consistent past, we're thinking about the people who that narrative worked for.

CINDY COHN
I really appreciate this point. To me, what was exciting about the internet and, you know, I'm a little older. I was alive during the seventies, um, and watched Walter Cronkite and, you know, this idea that, you know, old white guys in New York get to decide what the rest of us see, which is, that's who ran the networks, right.

That, that, you know, and maybe we had a little PBS, so we got a little Sesame Street too. 

But the promise of the Internet was that we could hear from more and more diverse voices, and reduce the power of those gatekeepers. What is scary is that some people are now pretty much saying that the answers to the problems of today’s Internet is to find four old white guys and let them decide what all the rest of us see again.    

ALICE MARWICK
I think it's really easy to blame the internet for the ills of society, and I, I guess I'm a digital critic, but I'm ultimately, I love the internet, like I love social media. I love the internet. I love online community. I love the possibilities that the internet has opened up for people. And when I look at the main amplifiers of disinformation, it's often politicians and political elites whose platforms are basically independent of the internet.

Like people are gonna cover, you know, leading politicians regardless of what media they're covering them with. And when you look at something like the lies around the Dominion voting machines, like, yes, those lies start in these really fringy internet communities, but they're picked up and amplified incredibly quickly by mainstream politicians.

And then they're covered by mainstream news. So who's at fault there? I think that blaming the internet really ignores the fact that there's a lot of other players here, including the government, you know, politicians, these big mainstream media sources. And it's really convenient to blame all social media or just the entire internet for some of these ills, but I don't think it's accurate.

CINDY COHN
Well, one of the things that I saw in your research, and our friend Yochai Benkler has done in a lot of things, is the role of amplifiers, right? These places where people, you know, agree about things that aren't true and converse about things that aren't true, they predate the internet. Maybe the internet gave a little juice to them, but what really gives juice to them is these amplifiers who, as I think you rightly point out, are some of the same people who were the mainstream media controllers in that hazy past of yore. Um, I think that if this stuff never makes it to more popular amplifiers, I don't think it becomes the kind of thing that we worry about nearly so much.

ALICE MARWICK
Yeah, I mean, when I was looking at white supremacist disinformation in 2017, someone I spoke with pointed out that the mainstream media is the best recruitment tool for white supremacists, because historically it's been really hard for white supremacists to recruit. And I'm not talking about, like, historically in the thirties and forties, I'm talking about in the eighties and nineties, when they had sort of lost a lot of their mainstream political power.

It was very difficult to find like-minded people, especially for people living in places that were a little bit more progressive or multiracial. Most people reading a debunking story in the Times or the Post or wherever about white supremacist ideas are going to disagree with those ideas.

But even if one in a thousand believes them and is like, oh wow, this is a person who's spreading white supremacist ideas, I can go to them and learn more about it. That is a far more powerful platform than anything that these fringe groups had in the past. And one of the things that we've noticed in our research is that often conspiracy theories go mainstream precisely because they're being debunked by the mainstream media.

CINDY COHN
Wow. So there's two kinds of amplifiers. There's the amplifiers who are trying to debunk things and accidentally perhaps amplify. But there are people who are intentional amplifiers as well, and both of them have the same effect, or at least both of them can spread the misinformation.

ALICE MARWICK
Yeah. I mean, of course, debunking has great intentions, right? We don't want horrific misinformation and disinformation to go and spread unchecked. But one of the things that we noticed when we were looking at news coverage of disinformation was that a lot of the times the debunking aspect was not as strong as we would've expected.

You know, you would expect a news story saying, this is not true, this is false, the presumptions are false. But instead, you'd often get these stories where they kind of repeated the narrative and then at the end there was, you know, this is incorrect. And the false narrative is often much more interesting and exciting than whatever the banal truth is.

So I think a lot of this has to do with the business model of journalism, right? There's a real need to comment on everything that comes across Twitter, just so that you can get some of the clicks for it. And that's been really detrimental, I think, to journalists having the time and the space to really research things and craft their pieces.

You know, it's an underpaid occupation. They're under a huge amount of economic and time pressure to like get stories out. A lot of them are working for these kind of like clickbaity farms that just churn out news stories on any hot topic of the day. And I think that is just as damaging and dangerous as some of these social media platforms.

JASON KELLEY
So when it comes to debunking, there's a sort of parallel, which is fact checking. And, you know, I have tried to fact check people myself, individually. It doesn't seem to work. Does it work when it's kind of built into the platform, as we've seen in different spaces like Facebook, or Twitter with the Community Notes they're testing out now?

Or does that also kind of amplify it in some way because it just serves to upset, let's say, the people who have already decided to latch onto the thing that is supposedly being fact checked.

ALICE MARWICK
I think fact checking does work in some instances, if it's about things that people don't already have a deep emotional attachment to. I think sometimes also if it's coming from someone they trust, you know, like a relative or a close friend. I think there are instances in which it doesn't get emotional and people are like, oh, I was wrong about that, that's great. And then they move on.

When it's something like Facebook where, you know, there's literally like a little popup saying, you know, this is untrue. Oftentimes what that does is it just reinforces this narrative that the social platforms are covering things up and that they're biased against certain groups of people because they're like, oh, Facebook only allows for one point of view.

You know, they censor everybody who doesn't believe X, Y, or Z. And the thing is that I think both liberals and conservatives believe that, obviously the narrative that social platforms censor conservatives is much stronger. But if you look at the empirical evidence, conservative stories perform much better on social media, specifically Facebook and Twitter, than do liberal stories.

So it's kind of like, it makes nobody happy. I don't think we should be amplifying, especially, extremist views or views that are really dangerous. And I think that what you wanna do is get rid of the lowest hanging fruit. You don't wanna convert new people to these ideas. There might be some people who are already so enmeshed in some of these communities that it's gonna be hard for them to find their way out, but let's try to minimize the number of people who are exposed to it.

JASON KELLEY
That's interesting. It sounds like there are some models of fact checking that can help, but it really applies more to the type of information that's being fact checked than to the specific way that the platform sets it up. Is that what I'm hearing? Is that right?

ALICE MARWICK
Yeah, I mean, the problem is, with a lot of people online, I bet if you ask 99 people if they consider themselves to be critical thinkers, 95 would say, yes, I'm a critical thinker. I'm a free thinker.

JASON KELLEY
A low estimate, I'm pretty sure.

ALICE MARWICK
A low estimate. So let's say you ask a hundred people and 99 say they're critical thinkers. Um, you know, I interview a lot of people who have what we might call unusual beliefs, and they all claim that they do fact checking and that, when they hear something, they want to see if it's true.

And so they go and read other perspectives on it. And obviously, you know, they're gonna tell me, the researcher, what they think I wanna hear. They're not gonna be like, oh, I saw this thing on Facebook and then I spread it to 2000 people, and then it turned out it was false. Um, but especially in communities like QAnon or anti-vaxxers, they already think of themselves as researchers.

A lot of people who are into conspiracy theories think of themselves as researchers. That's one of their identities. And they spend quite a bit of time going down rabbit holes on the internet, looking things up and reading about it. And it's almost like a funhouse mirror held up to academic research because it is about the pleasure of learning, I think, and the joy of sort of educating yourself and these sort of like autodidactic processes where people can kind of learn just for the fun of learning. Um, but then they're doing it in a way that's somewhat divorced from what I would call sort of empirical standards of data collection or, you know, data assessment.

CINDY COHN
So, let's flip it around for a second. What does it look like if we are doing this right? What are the things that we would see in our society and in our conversations that would indicate that we're, we're kind of on the right path, or that we're, we're addressing this?

ALICE MARWICK
Well, I mean, the problem is this is a big problem. So it requires a lot of solutions. A lot of different things need to be worked on. You know, the number one thing I think would be toning down, you know, violent political rhetoric in general. 

Now how you do that, I'm not sure. I think there's this kind of window of discourse that's open that I think needs to be shut, where maybe we need to get back to slightly more civil levels of discourse. That's a really hard problem to solve. In terms of the internet, I think right now there's been a lot of focus on the biggest social media sites, and what's happening is you have a lot of smaller social sites, and it's much more difficult to play whack-a-mole with a hundred different platforms than it is with three.

CINDY COHN
Given that we think a pluralistic society is a good thing, and that we shouldn't all be having exactly the same beliefs all the time, how do we nurture that diversity without, you know, the kind of violent edges? Or is it inevitable? Is there a way that we can nurture a pluralistic society that doesn't get to this us versus them, what-team-are-you-on kind of approach that I think underlies some of the spilling into violence that we've seen?

ALICE MARWICK
This is gonna sound naive, but I do think that there are a lot more commonalities between people than there are differences. So I interviewed a woman last week who's a conservative evangelical anti-vaxxer. She and I don't have a lot in common in any way, but we had a very nice conversation, and one of the things that she told me is that she has this one particular interest that's brought her into conversation with a lot of really liberal people.

And so because she's interacted with a lot of them, she knows that they're not, like, demonic or evil. She knows they're just people, and they have really different opinions on a lot of really serious issues, but they're still able to sort of chat about the things that they do care about.

And I think that if we can trace those lines of inclusion and connectivity between people, that's a much more positive area for growth than just constantly focusing on the differences. And that's easy for me to say as a white woman, right? Like it's much harder to deal with these differences if the difference in question is that the person thinks you're, you know, genetically inferior or that you shouldn't exist.

Those are things that are not easy. You can't just kumbaya your way out of those kinds of things. And in that case, I think we need to center the concerns of the most vulnerable and of the most marginalized, and make sure they're the ones whose voices are getting heard and their concerns are being amplified, which is not always the case, unfortunately.

JASON KELLEY
So let's say that we got to that point and um, you know, the internet space that you're on isn't as polarized, but it's pluralistic. Can you describe a little bit about what that feels like in your mind?

ALICE MARWICK
I think one thing to remember is that most people don't really care about politics. You know, a lot of us are kind of Twitter obsessed, and we follow the news and we see our news alerts come up on our phones and we're like, ooh, what just happened? Most people don't really care about that stuff. Look at a site like Reddit, which gets a bad rap, but which I think is just a wonderful site for a lot of reasons.

It's mostly focused around interest-based communities, and the vast, vast majority of them are not about politics. They're about all kinds of other things. You know very mundane stuff. Like you have a dog or a cat, or you like the White Lotus and you wanna talk about the finale. Or you, you know, you live in a community and you want to talk about the fact that they're building a new McDonald's on like Route Six or whatever.

Yes, in those spaces you'll see people get into spats and you'll see people get into arguments, and in those cases there's usually some community moderation, but generally I think a lot of those communities are really healthy and positive. The moderators put forth, like, these are the norms.

And I think it's funny, I think some people would be surprised to hear Reddit called uplifting, but I think you see the same thing in some Facebook groups as well, um, where you have people who really love, like, quilting. I'm in dozens and dozens of Facebook groups on all kinds of weird things.

Like, “I found this weird thing at a thrift store,” or “I found this painting, you know, what can you tell me about it?” And I get such a kick out of seeing people from all these walks of life come together and talk about these various interests. And I do think that, you know, that's the utopian ideal of the internet that got us all so into it in the eighties and nineties.

This idea that you can come together with people and talk about things that you care about, even if you don't have anyone in your local immediate community who cares about those same things, and we've seen over and over that, that can be really empowering for people. You know, if you're an LGBTQ person in an area where there aren't that many other LGBTQ people, or if you're a black woman and you're the only black woman at your company, you know, you can get resources and support for that.

If you have an illness that isn't very well understood, you know, you can do community education on that. So, you know, these pockets of the internet, they exist and they're pretty big. And when we just constantly focus on this small minority of people who are on Twitter yelling at each other about stuff, I think it really overlooks the fact that so much of the internet is already this place of, like, enjoyment and, you know, hope.

CINDY COHN
Oh, that is so right, and so good to be reminded of. It's not that we have to fix the internet, it's that we have to grow the part of the internet that never got broken, the part that is already fixed.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.


CINDY COHN
Now back to our conversation with Alice Marwick. In addition to all of her fascinating research on disinformation that we’ve been talking about so far, Alice has also been doing some work on another subject very near and dear to our hearts here at EFF – privacy.

Alice has a new book coming out in May 2023 called The Private is Political – so of course we couldn’t let her go without talking about that. 

ALICE MARWICK
I wanted to look at how you can't individually control privacy anymore, because all of our privacy is networked because of social media and big data. We share information about each other, and information about us is collected by all kinds of entities.

You know, you can configure your privacy settings till the cows come home, but it's not gonna change whether your photo gets swept up in, you know, some AI that then uses it for other kinds of purposes. And the second thing is to think about privacy as a political issue that has big impacts on everyone's lives, especially people who are marginalized in other areas.

I interviewed, oh, people from all kinds of places and spaces with all sorts of identities, and there's this really big misconception that people don't care about privacy. But people care very deeply about privacy, and the way that they show that care manifests in so many different kinds of creative ways.

And so I'm looking forward to sharing the stories of the people I spoke with.

CINDY COHN
That's great. Can you tell us one? I don't wanna spoil it, but -

ALICE MARWICK
Yeah, no. So I spoke with Jazz in North Carolina, and these are all pseudonyms. Jazz is an atheist, genderqueer person, and they come from a pretty conservative Southern Baptist family, and they're also homeless. They have a child who lives with their sister, and they get a little bit of help from their family, not a lot, but enough that it can make the difference between whether they get by or not.

So what they did is they created two completely different sets of internet accounts. They have two Facebooks, two Twitters, two email addresses. Everything is different and it's completely firewalled. On one, they use their preferred name and pronouns. On the other, they use the pronouns they were assigned at birth and the name that their parents gave them. And the contrast between the two was just extreme. Jazz said that the Facebook page that really reflects them, that's their “me” page. That's where they can be who they really are, because they have to kind of cover up who they are in so many other areas of their lives.

So they get this sort of big kick out of having this space on the internet where they can be fiery and they can talk about politics and gender and things that they care about, but they have a lot to lose if that, you know, seeps into their other life. So they have to be really cognizant of things like: who does Facebook recommend that you friend? Who might see my other email address? Who might do a Google search for my name?

And so I call this privacy work. It's the work that all of us do to maintain our privacy, and we all do it, but it's just much more intense for some kinds of people. And so I see in Jazz, you know, a lot of these themes: somebody who is suffering from intersectional forms of marginalization, but is still kind of doing the best they can.

And, you know, moving forward in the world, somebody who's being very creative with the internet. They're using it in ways that none of the designers or technologists ever intended, and they're making it work for them, but they're also not served well by these technologies, because they don't have the options to set the technologies up in ways that would fit their life or their needs.

Um, and so what I'm really calling for here is to, rather than thinking about privacy as individual, as something we each have to solve, as seeing it as a political and a structural problem that cannot be solved by individual responsibility or individual actions.

CINDY COHN
I so support that. That is certainly what we've experienced in the world as well, you know, the fight against the real names policy, say at Facebook, which really impacted the LGBTQ and trans communities, especially because people there are changing their names, right? And that's important.

This real names policy, you know, first of all it's not based on good science. This idea that if you attach people's names to what they say, they will behave better is, you know, belied by all of Facebook. It doesn't have any science behind it at all. But it also has negative effects for people's safety: we work with a lot of domestic violence victims, and being able to separate out one identity from another is tremendously important, and can matter for people's very lives. Or it could just be that, you know, when I'm Cindy at the dog park, I'm not interested in being Cindy who's the ED of EFF. Being able to segment out your life and show up as different people, there's a lot of power in that, even if it's not necessary to save your life.

ALICE MARWICK
Yeah, absolutely. That ability to maintain our social roles and to play different aspects of ourselves at different times, that's a very human thing, and that's sort of fundamental to privacy. It's about which parts of yourself you wanna reveal at any given time. And when you have these huge sites like Facebook where they want a real name and they want you to have a persistent identity, it makes that really difficult.

Whereas sites like Reddit where you can have a pseudonym and you can have 12 accounts and nobody cares, and the site is totally designed to deal with that. You know, that works a lot better with how most people, I think, want to use the internet.

CINDY COHN
What other things do you think we can do? I mean, I'm assuming that we need some legal support here as well as technical support for a more private internet. Really, a more privacy-protective internet.

ALICE MARWICK
I mean, we need comprehensive data privacy laws.

CINDY COHN
Yeah.

ALICE MARWICK
The fact that every different type of personal information is governed differently and some aren't governed at all. The fact that your email is not private, that, you know, anything you do through a third party is not private, whereas your video store records are private.

That makes no sense whatsoever. You know, it's just this complete amalgam. It doesn't have any underlying principle whatsoever. The other thing I would say is data brokers. We gotta get 'em out. We gotta get rid of them. You shouldn't be able to collect data for one purpose and then use it for God knows how many other purposes.

I think, you know, I was very happy under the Obama administration to see that the FTC was starting to look into data brokers. It seems like we lost a lot of that energy during the Trump administration, but you know, to me they're public enemy number one. Really don't like 'em.

CINDY COHN
We are with you. And you know this isn’t new – as early as 1973 the federal government developed something called the Fair Information Practice Principles that included recognizing that it wasn’t fair to collect data for one purpose and then use it for another without meaningful consent – but that’s the central proposition that underlies the data broker business model. I appreciate that your work confirms that those ideas are still good ones.

ALICE MARWICK
Yeah, I think there's a group of people doing critical privacy and critical surveillance studies, um, a more diverse group of people than we've typically seen studying privacy. For a long time it was just sort of the domain of, you know, legal scholars and computer scientists. And so now that it's being opened up to qualitative analysis and sociology and other fields, you know, I think we're starting to see a much more comprehensive understanding, which hopefully at some point will, you know, affect policymaking and technology design as well.

CINDY COHN
Yeah, I sure hope so. I mean, I think we're in a time when our US Supreme Court is really not grappling with privacy harms, and is effectively making it harder and harder to at least use the judicial remedies to try to address privacy harm. So, you know, this development in the rest of society and in people's thinking eventually, I think, will leak over into the judicial side.

But one of the things that a fixed internet would give us is the ability to have actual accountability for privacy harms, at a level much better than what we have now. And the other thing I hear you really developing out is that maybe the individual model, which is kind of inherent in a lot of litigation, isn't really the right model for thinking about how to remedy all of this either.

ALICE MARWICK
Well, a lot of it is just theatrical, right? It reminds me of, you know, security theater at the airport. Like the idea that by clicking through a 75-page, you know, terms of service change that's written at a level that would require a couple of years of law school to parse, you've meaningfully consented. If you actually sat and read all of those agreements, it would take up like two weeks of your life every year.

Like that is just preposterous. Like, nobody would sit and be like, okay, well here's a problem. What's the best way to solve it? It's just a loophole that allows companies to get away with all kinds of things that I think are, you know, unethical and immoral by saying, oh, well we told you about it.

But I think often what I hear from people is, well, if you don't like it, don't use it. And that's easy to say if you're talking about something that is, you know, an optional extra to your life. But when we're talking about the internet, there aren't other options. And I think what people forget is that the internet has replaced a lot of technologies that kind of withered away. You know, I've driven across the country three times, and the first two times were kind of pre-mobile internet, or pre-, you know, ubiquitous internet. And you had a giant road atlas in your car, every gas station had maps, and there were payphones everywhere. Now most payphones are gone; you go to a gas station and ask for directions, they're gonna look at you blankly; and no one has a road atlas. You know, there were all these infrastructures that existed pre-internet that allowed us to exist without smartphones and the internet, and now most of those are gone. What are you supposed to do if you're in college and you're not using, at the very least, your course management system, which is probably already, you know, collecting information on you and possibly selling it to a third party?

You can't pass your class. If you're not joining your study group, which might be on Facebook or WhatsApp or any other medium, you can't communicate with people. It's absolutely ridiculous that we're just saying, oh, well, if you don't like it, don't use it. Like, you don't tell people, you know.

If you're being targeted by, like, a murderous sociopath: oh, just don't go outside, right? Just stay inside all the time. That's just terrible advice, and it's not realistic.

CINDY COHN
No, I think that is true, and certainly trying to find a job. I mean, there are benefits to the fact that all of this stuff is networked, but it really does shine a light on this terms of service approach, which treats it as if it were a contract, like a freely negotiated contract like I learned about in law school, with two equal parties having a negotiation and coming to a meeting of the minds. It's a whole other planet from that.

And to try to bring that frame to, you know, whether you enforce those terms or not, it's jarring to people. It's not how people live. And so it feels like a way in which the legal system is kind of divorced from our lives. And if we get it right, the legal terms and the things that we are agreeing to will be things that we actually agree to, not things that are stuffed into a document that we never read or realistically can't read.

ALICE MARWICK
Yeah, I would love it if the terms of service was an actual contract and I could sit there and be like, all right, Facebook, if you want my business, this is what you have to do for me. And make some poor entry level employees sit there and go through all my ridiculous demands. Like, sure, you want it to be a contract, then I'm gonna be an equal participant.

CINDY COHN
You want those green M&Ms in the green room?

ALICE MARWICK
Yeah, I want, I want different content moderation standards. I want a pony, I want glittery gifs on every page. You know, give it all to me.

CINDY COHN
Yeah. I mean, you know, there's a way in which a piece of the fediverse strategy, which I think we're kind of at the beginning of in this moment, is that a little bit: you have a smaller community, you have people who run the servers, um, who you can actually interact with.

I mean, I don't know that there are ponies, but, um, you know, one of the things that will help get us there is smaller, right? We can't do content moderation at scale, and we can't do, you know, contractual negotiations at scale. So smaller might be helpful, and I don't think it's gonna solve all the problems.

But I think that there's a way in which you can at least get your arms around the problem if you're dealing with a smaller community that can then interoperate with other communities, but isn't beholden to them with one rule to rule them all.

ALICE MARWICK
Yeah, I mean, I think the biggest problem right now is we need to get usability and UX right, and these platforms need to be just as easy to use as, like, the easiest social platform. You know, it needs to be something that, if you don't have a college education, if you're not super techy, if you're only familiar with very popular social media platforms, you're still able to use things like Mastodon.

I don't think we're quite there yet, but I can see a future in which we get there.

CINDY COHN
Well thank you so much for continuing to do this work.

ALICE MARWICK
Oh, thank you. Thank you, Cindy. Thank you, Jason. It was great to chat today.

JASON KELLEY
I'm so glad we got to talk to Alice. That was a really fun conversation and one that I think really underscored a point that I've noticed, um, which is that over the last, I don't know, many years we've seen Congress and other legislators try to tackle these two separate issues that we talked with Alice about.

One being sort of like content on the internet, and the other being privacy on the internet. And when we spoke with her about privacy, it was clear that there are a lot of obvious and simple and direct solutions for making privacy on the internet something that actually exists, compared to content, which is a much stickier issue.

And it's interesting that Congress and other legislators have consistently focused on one of these two topics, or let's say both of them, at the expense of the one that actually has fairly direct solutions. That really sticks out for me, but I'm wondering, since I've blathered on: what do you find most interesting about what we talked with her about? There was a lot there.

CINDY COHN
Well, I think that Alice does a great service to all of us by pointing out all the ways in which the kind of easy solutions that we reach to, especially around misinformation and disinformation and easy stories we tell ourselves are not easy at all and not empirically supported. So I think one of the things she does is just shine a light on the difference between the kind of stories we tell ourselves about how we could fix some of these problems and the actual empirical evidence about whether those things will work or not.

The other thing that I appreciated is she kind of pointed to spaces on the internet where things are kind of fixed. She talked about Reddit, she talked about some of the fan fiction places, she talked about Facebook groups, pointing out that, you know, sometimes we can be overly focused on politics and the darker pieces of the internet, and that these places that are supportive and loving and good communities that are doing the right thing, they already exist.

We don't have to create them, we just have to find a way to foster them, um, and build more of them. Make more of the internet that experience. But it's refreshing to realize that, you know, massive pieces of the internet were never broken, um, and don't need to be fixed.

JASON KELLEY
That is 100% right. We're sort of tilted, I think, to focus on the worst things, which is part of our job at EFF. But it's nice when someone says, you know, there are actually good things. And it reminds us that in a lot of ways it's working, and we can make it better by focusing on what's working.

Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member, donate, or look at hoodies, t-shirts, hats and other merch, just in case you feel the need to represent your favorite podcast and your favorite digital rights organization.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time in two weeks.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.
MUSIC CREDIT ANNOUNCER
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:
Probably Shouldn’t by J.Lang featuring Mr_Yesterday

CommonGround by airtone featuring: simonlittlefield

Additional beds and alternate theme remixes by Gaëtan Harris

Vote for EFF’s 'How to Fix the Internet’ Podcast in the Signal Awards!

By: Josh Richman
October 2, 2024 at 17:11

We’re thrilled to announce that EFF’s “How to Fix the Internet” podcast is a finalist in the Signal Awards 3rd Annual Listener's Choice competition. Now we need your vote to put us over the top!

Vote now!

We’re barraged by dystopian stories about technology’s impact on our lives and our futures — from tracking-based surveillance capitalism to the dominance of a few large platforms choking innovation to the growing pressure by authoritarian governments to control what we see and say. The landscape can feel bleak. Exposing and articulating these problems is important, but so is envisioning and then building a better future.

That’s where our podcast comes in. Through curious conversations with some of the leading minds in law and technology, “How to Fix the Internet” explores creative solutions to some of today’s biggest tech challenges.  

Over our five seasons, we’ve welcomed well-known, mainstream names like Marc Maron discussing patent trolls, Adam Savage discussing the right to tinker and repair, Dave Eggers discussing when to set technology aside, and U.S. Sen. Ron Wyden, D-OR, discussing how Congress can foster an internet that benefits everyone. But we’ve also had lesser-known names who do vital, thought-provoking work – Taiwan’s then-Minister of Digital Affairs Audrey Tang discussed seeing democracy as a kind of open-source social technology, Alice Marwick discussed the spread of conspiracy theories and disinformation, Catherine Bracy discussed getting tech companies to support (not exploit) the communities they call home, and Chancey Fleet discussed the need to include people with disabilities in every step of tech development and deployment.

 That’s just a taste. If you haven’t checked us out before, listen today to become deeply informed on vital technology issues and join the movement working to build a better technological future. 

 And if you’ve liked what you’ve heard, please throw us a vote in the Signal Awards competition! 

Vote Now!

Our deepest thanks to all our brilliant guests, and to the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, without whom this podcast would not be possible. 

Electronic Frontier Foundation to Present Annual EFF Awards to Carolina Botero, Connecting Humanity, and 404 Media

By: Josh Richman
July 25, 2024 at 10:20
2024 Awards Will Be Presented in a Live Ceremony Thursday, Sept. 12 in San Francisco

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Carolina Botero, Connecting Humanity, and 404 Media will receive the 2024 EFF Awards for their vital work in ensuring that technology supports freedom, justice, and innovation for all people.  

The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law. 

The EFF Awards ceremony will start at 6:30 pm PT on Thursday, Sept. 12, 2024 at the Golden Gate Club, 135 Fisher Loop in San Francisco’s Presidio. Guests can register at https://www.eff.org/event/eff-awards-2024. The ceremony will be livestreamed and recorded. 

For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential. 

“Maintaining internet access in a conflict zone, conducting fearless investigative reporting on how tech impacts our lives, and bringing the fight for digital rights and social justice to significant portions of Latin America are all ways of ensuring technology advances us all,” EFF Executive Director Cindy Cohn said. “This year’s EFF Award winners embody the internet’s highest ideals, building a better-connected and better-informed world that brings freedom, justice, and innovation for everyone. We hope that by recognizing them in this small way, we can shine a spotlight that helps them continue and even expand their important work.” 

Carolina Botero: Fostering Digital Human Rights in Latin America 

Carolina Botero is a researcher, lecturer, writer, and consultant who is among the foremost leaders in the fight for digital rights in Latin America. In more than a decade as executive director of the Colombia-based Karisma Foundation — founded in 2003 to ensure that digital technologies protect and advance fundamental human rights and promote social justice — she transformed the organization into an outspoken voice fostering freedom of expression, privacy, access to knowledge, justice, and self-determination in our digital world, with regional and international impact. She left that position this year, opening the door for a new generation while leaving a strong and inspiring legacy for those in Latin America and beyond who advocate for a digital world that enhances rights and empowers the powerless. Botero holds a master’s degree in international law and cooperation from Belgium’s Vrije Universiteit Brussel and a master’s degree in commercial and contracting law from Spain’s Universitat Autònoma de Barcelona. She frequently authors op-eds for Colombia’s El Espectador and La Silla Vacía, and serves on the advisory board of The Regional Center for Studies for the Development of the Information Society (Cetic.br), monitoring the adoption of information and communication technologies in Brazil. She previously served on the board of Creative Commons and as a member of the UNESCO Advisory Committee on Open Science.  

Connecting Humanity: Championing Internet Access in Gaza 

Connecting Humanity is a Cairo-based nonprofit organization that helps Palestinians in Gaza regain access to the internet – a crucial avenue for free speech and the free press. Founded in late 2023 by Egyptian journalist, writer, podcaster, and activist Mirna El Helbawi, Connecting Humanity collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. Connecting Humanity has collected hundreds of thousands of eSIMs from around the world and distributed them to people in Gaza, providing a lifeline for many caught up in Israel’s war on Hamas. People in crisis zones rely upon the free flow of information to survive, and restoring internet access in places where other communications infrastructure has been destroyed helps with dissemination of life-saving information and distribution of humanitarian aid, ensures that everyone’s stories can be heard, and enables continued educational and cultural contact. El Helbawi previously worked as an editor at 7 Ayam Magazine and as a radio host at Egypt’s NRJ Group; she was shortlisted for the Arab Journalism Award in 2016, and she created the podcast Helbing.

404 Media: Fearless Journalism 

As the media landscape in general and tech media in particular keeps shrinking, 404 Media — launched in August 2023 — has tirelessly forged ahead with incisive investigative reports, deep-dive features, blogs, and scoops about topics such as hacking, cybersecurity, cybercrime, sex, artificial intelligence, consumer rights, government and law enforcement surveillance, privacy, and the democratization of the internet. Co-founders Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg all worked together at Vice Media’s Motherboard, but after that site's parent company filed for bankruptcy in May 2023, the four journalists resolved to go out on their own and build what Maiberg has called "very much a website by humans, for humans about technology. It’s not about the business of technology — it’s about how it impacts real people in the real world.” Among many examples, 404 Media has uncovered a privacy issue in the New York subway system that let stalkers track peoples’ movements, causing the MTA to shut down the feature; investigated a platform being used to generate non-consensual pornography with AI, causing the platform to make changes limiting abuse; and reported on dangerously inaccurate AI-generated books that Amazon then removed from sale. 

 To register for this event: https://www.eff.org/event/eff-awards-2024 

For past honorees: https://www.eff.org/awards/past-winners 

 

Journalists Sue Massachusetts TV Corporation Over Bogus YouTube Takedown Demands

By: Josh Richman
July 24, 2024 at 20:17
Posting Video Clips of Government Meetings Is Fair Use That Doesn’t Violate the DMCA, EFF’s Clients Argue

BOSTON—A citizen journalists’ group represented by the Electronic Frontier Foundation (EFF) filed a federal lawsuit today against a Massachusetts community-access television company for falsely convincing YouTube to take down video clips of city government meetings.

The lawsuit was filed in the U.S. District Court for Massachusetts by Channel 781, an association of citizen journalists founded in 2021 to report on Waltham, MA, municipal affairs via its YouTube channel. The Waltham Community Access Corp.’s misrepresentation of copyright claims under the Digital Millennium Copyright Act (DMCA) led YouTube to temporarily deactivate Channel 781, making its work disappear from the internet last September just five days before an important municipal election, the suit says. 

“WCAC knew it had no right to stop people from using video recordings of public meetings, but asked YouTube to shut us down anyway,” Channel 781 cofounder Josh Kastorf said. “Democracy relies on an informed public, and there must be consequences for anyone who abuses the DMCA to silence journalists and cut off people’s access to government.” 

Channel 781 is a nonprofit, volunteer-run effort, and all of its content is available for free. Its posts include videos of its members reporting on news affecting the city, editorial statements, discussions in a talk-show format, and interviews. It also posts short video excerpts of meetings of the Waltham city council and other local government bodies. 

Waltham Community Access Corp. (WCAC) operates two cable television channels:  WCAC-TV is a Community Access station that provides programming geared towards the interests of local residents, businesses, and organizations, and MAC-TV is a Government Access station that provides coverage of municipal meetings, events, and special government-related programming. 

Some city meeting video clips that Channel 781 posted to YouTube were short excerpts from videos recorded by WCAC and first posted to WCAC’s website. Channel 781 posted them on YouTube to highlight newsworthy statements by city officials, to provoke discussion and debate, and to make the information more accessible to the public, including to people with disabilities. 

The DMCA notice and takedown process lets copyright holders ask websites to take down user-uploaded material that infringes their copyrights. Although Kastorf had explained to WCAC’s executive director that Channel 781’s use of the government meeting clips was a fair use under copyright law, WCAC sent three copyright infringement notices to YouTube referencing 15 specific Channel 781 videos, leading YouTube to deactivate the account and render all of its content inaccessible. YouTube didn’t restore access to the videos until two months later, after a lengthy intervention by EFF. 

The lawsuit—which seeks damages and injunctive relief—says WCAC knew, should have known, or failed to consider that the government meeting clips were a fair use of copyrighted material, and so it acted in bad faith when it sent the infringement notices to YouTube. 

“Nobody can use copyright to limit access to videos of public meetings, and those who make bogus claims in order to stifle critical reporting must be held accountable,” said EFF Intellectual Property Litigation Director Mitch Stoltz. “Phony copyright claims must never subvert the public’s right to know, and to report on, what government is doing.” 

For the complaint: https://www.eff.org/document/07-24-2024-channel-781-news-v-waltham-community-access-corporation-complaint

For more on the DMCA: https://www.eff.org/issues/dmca  

For EFF’s Takedown Hall of Shame: https://www.eff.org/takedowns

Contact: 
Mitch Stoltz
IP Litigation Director

Podcast Episode: Fighting Enshittification

By: Josh Richman
July 2, 2024 at 03:06

The early internet had a lot of “technological self-determination" — you could opt out of things, protect your privacy, control your experience. The problem was that it took a fair amount of technical skill to exercise that self-determination. But what if it didn’t? What if the benefits of online privacy, security, interoperability, and free speech were more evenly distributed among all internet users?


(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that award-winning author and EFF Special Advisor Cory Doctorow wants us to fight for. His term “enshittification” — a downward spiral in which online platforms trap users and business customers alike, treating them more and more like commodities while providing less and less value — was selected by the American Dialect Society as its 2023 Word of the Year. But, he tells EFF’s Cindy Cohn and Jason Kelley, enshittification analysis also identifies the forces that used to make companies treat us better, helping us find ways to break the cycle and climb toward a better future. 

In this episode you’ll learn about: 

  • Why “intellectual property” is a misnomer, and how the law has been abused to eliminate protections for society 
  • How the tech sector’s consolidation into a single lobbying voice helped bulldoze the measures that used to check companies’ worst impulses 
  • Why recent antitrust actions provide a glimmer of hope that megacompanies can still be forced to do better for users 
  • Why tech workers’ labor rights are important to the fight for a better internet 
  • How legislative and legal losses can still be opportunities for future change 

Cory Doctorow is an award-winning science fiction author, activist, journalist and blogger, and a Special Advisor to EFF. He is the editor of Pluralistic and the author of novels including “The Bezzle” (2024), “The Lost Cause” (2023), “Attack Surface” (2020), and “Walkaway” (2017); young adult novels including “Homeland” (2013) and “Little Brother” (2008); and nonfiction books including “The Internet Con: How to Seize the Means of Computation” (2023) and “How to Destroy Surveillance Capitalism” (2021). He is EFF's former European director and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. 

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

CORY DOCTOROW
So interop, you know, it's the idea that you don't need to buy your washing machine from the same people who sold you your clothes. You can use anyone's washing soap in that washing machine. Your dishes go in, in any dishwasher. Anyone's gas or electricity go into your car, you can bring your luggage onto any airline.
You know, there's just this kind of general presumption that things work together and sometimes that's just a kind of happy accident or a convergence where, you know, the airlines basically all said, okay, if it's bigger than seventy-two centimeters, we're probably gonna charge you an extra fee. And the luggage makers all made their luggage smaller than seventy-two centimeters, or you know, what a carry-on constitutes or whatever. Sometimes it's very formal, right? Sometimes like you go to a standards body and you're like, this is the threading gauge and size of a standard light bulb. And that means that every light bulb that you buy is gonna fit into every light bulb socket.
And you don't have to like read the fine print on the light bulb to find out if you've bought a compatible light bulb. And, sometimes it's adversarial. Sometimes the manufacturer doesn't want you to do it, right? Like, so HP wants you to spend something like $10,000 a gallon on printer ink and most of us don't want to spend $10,000 a gallon on printer ink and so out there are some people who figured out how HP printers ask a cartridge, ‘Hey, are you a cartridge that came from HP?’.
And they figured out how to get cartridges that aren't made by HP to say, ‘Why yes, I am.’ And you know, it's not like the person buying the cartridge is confused about this. They are specifically, like, typing into a search engine, ‘How do I avoid paying HP $10,000 a gallon?’

CINDY COHN
That's Cory Doctorow. He's talking about all the places in our lives where, whether we call it that or not, we get to enjoy the power of interoperability.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
We spend a lot of time here at EFF warning about the things that could go wrong online — and then of course jumping into the fray when they do go wrong. But on this show we're trying to envision what the world looks like if we start to get things right.

JASON KELLEY
Our guest today is Cory Doctorow. He is one of the world’s leading public thinkers about the digital world, as well as an author and activist. He writes both fiction and nonfiction that has more ideas per page than anyone else we know.

CINDY COHN
We’re lucky enough that he’s been one of our colleagues at EFF for over 20 years and he’s one of my dearest friends. We had Cory on the podcast during our first season. I think he was our very first guest - but we thought it was time to check in again. And that’s not only because he’s so much fun to talk to, but also because the central idea he has championed for addressing the problems of platform monopolies – an idea called interoperability which we also call competitive compatibility – it’s started to get real traction in policy spaces both in the US and in Europe.
I quote Cory a lot on this show, like the idea that we don't want to go back to the good old days. We're trying to create the good new days. So I thought that it was a good place to start. What do the good new days look like in the Coryverse?

CORY DOCTOROW
So the old good internet was characterized by a very high degree of what I call like technological self-determination. Just the right to just decide how the digital tools you use work.
The problem was that it also required a high degree of technical skill. There are exceptions right. I think ad blockers are kind of our canonical exception for, you know, describing what a low-skill, high-impact element of technological self-determination is. Like more than half of all web users now run ad blockers. Doc Searls calls it the largest consumer boycott in human history.
And you don't have to be a brain surgeon or a hacker to install an ad blocker. It's just like a couple of clicks and away you go. And I think that a new good internet is one in which the benefits of technological self-determination, all the things you get beyond an ad blocker, like, you know, I'm speaking to you from a household that's running a Pi-hole, which is like a specialized data appliance that actually blocks ads in other things like smart TVs and apps and whatever.
I have a personal VPN that I run off my home network so that when I'm roaming - I just got back from Germany and they were blocking the port that I used for my mail server, and I could VPN into my house and get my email as though I were connected via my home - all of those things should just accrue to you with the ease that you get from an ad blocker because we can harness markets and tinkerers and cooperatives and people who aren't just making a thing to scratch their own itch, but are actually really invested in other people who aren't technically sophisticated being able to avail themselves of these tools too. That's the new good internet

CINDY COHN
I love that. I mean, you know, what is it? The future is here. It's just not evenly distributed. You just want to evenly distribute the future, and also make it simpler for folks to use.

CORY DOCTOROW
Yeah. You know, the problem of the old good internet was not the part where skilled technical practitioners didn't have to put up with nonsense from companies that didn't have their best interests at heart. Right?
The problem was that not everybody got that. Well, the good future of the internet is one in which we more evenly distribute those benefits. The bad future of the internet we're living in now is the one in which it's harder and harder, even for skilled practitioners, to enjoy those benefits.

CINDY COHN
And harder for the rest of us to get them, right? I hear two things, both as an end user, my world's gonna have a lot more choices, but good choices about things I can do to protect myself and places I can look for that help. And then as somebody who's a hacker or an innovator, you're gonna have a lot easier way to take your good idea, turn it into something and make it actually work, and then let people find it.

CORY DOCTOROW
And I think it's even more than that, right? Because I think that there's also the kind of incentives effect. You know, I'm not the world's biggest fan of markets as the best way to allocate all of our resources and solve all of our problems. But one thing that people who really believe in markets like to remind us of is that incentives matter.
And there is a kind of equilibrium in the product planning meeting where someone is always saying, ‘If we make it this bad, will someone type into a search engine, ‘How do I unrig this game?’ Because once they do that, then all bets are off, right? Think about again, back to ad blockers, right? If, if someone in the boardroom says, Hey, I've calculated that if we make these ads 20% more invasive we’ll increase our revenue per user by 2%.
Someone else who doesn't care about users necessarily, might say, yeah, but we think 20% of users will type ‘How do I block ads’ into a search engine as a result of this. And the expected revenue from that user doesn't just stay static at what we've got now instead of rising by 2%. The expected revenue from that user falls to zero forever.
We'll never make an advertising dime off of that user once they type ‘How do I block ads’ into a search engine. And so it isn't necessary even that the tools defend you. The fact that the tools might defend you changes the equilibrium, changes the incentives, changes the conduct of firms. And where it fails to do that, it then affords you a remedy.
So it's both belt and suspenders. Plan A and plan B.

JASON KELLEY
It sounds like we're veering happily towards some of the things that you've talked about lately with the term that you coined last year about the current moment in our digital world: Enshittification. I listened to your McLuhan lecture and it brought up a lot of similar points to what you're talking about now. Can you talk about this term? In brief, what does it mean, and, you know, why did the American Dialect Society call it the word of the year?

CORY DOCTOROW
Right. So I mean, the top level version of this is just that tech has these unique, distinctive technical characteristics that allow businesses to harm their stakeholders in ways that are distinct from the ways that other companies can just because like digital has got this flexibility and this fluidity.
And so it sets up this pattern that as the regulation of tech and as the competition for tech and as the force that workers provided as a check on tech's worst, worst impulses have all failed, we've got this dynamic where everything we use as a platform, and every platform is decaying in the same way, where they're shifting value first to users, to trap users inside a walled garden, and then bringing in business customers with the promise of funneling value from those users to those business customers, trapping those business customers, and then once everybody is held hostage, using that flexibility of digital tools to take that value away without releasing the users.
So even though the service is getting worse and worse for you, and it's less and less valuable to you, you still find yourself unable to leave. And you are even being actively harmed by it as the company makes it worse and worse.
And eventually it reaches a breaking point. Eventually things are so bad that we leave. But the problem is that that's like a catastrophic ending. That's the ending that, you know, everybody who loved LiveJournal had. Where they loved LiveJournal and the community really mattered to them.
And eventually they all left, but they didn't all end up in the same place. The community was shattered.
They just ended up fragmented and you can still hear people for whom LiveJournal was really important, saying like, I never got that back. I lost something that mattered to me. And so for me, the Enshittification analysis isn't just about like how do we stop companies from being bad, but it's about how we allow people who are trapped by bad companies to escape without having to give up as much as they have to give up now.

CINDY COHN
Right, and that leads right into adversarial interoperability, which is a term that I think was coined by Seth Schoen, EFF’s original staff technologist. It's an idea that you have really thought about a lot Cory and developed out. We heard you talk at the beginning of the episode, with that example about HP printers.

CORY DOCTOROW
That adversarial interoperability, it's been in our technology story for as long as we've had digital tools, because digital tools have this flexibility we've alluded to already. You know, the only kind of digital computer we can make is the Turing-complete von Neumann machine.
It runs every program that's valid and that means that, you know, whenever a manufacturer has added an anti-feature or done something else abusive to their customers, someone else has been able to unlock it.
You know, when IBM was selling mainframes on the cheap and then charging a lot of money for printers and you know, keyboards and whatever, there were these things called plug compatible peripherals.
So, you know, these companies they call the Seven Dwarfs, Fujitsu and all these other tech companies that we now think of as giants, they were just cloning IBM peripherals. When Apple wanted to find a way for its users to have a really good experience using Microsoft Office, which Microsoft had very steadfastly refused them and had, uh, made just this unbelievably terrible piece of software called, uh, Office for the Mac that just didn't work and had all these compatibility problems, Steve Jobs just had his technologists reverse engineer Office, and they made iWork: Pages, Numbers, and Keynote.
And it can read and write all the files from Excel, PowerPoint and Word. So this has always been in our story and it has always acted as a hedge on the worst impulses of tech companies.
And where it failed to act as a hedge, it created an escape valve for people who are trapped in those bad impulses. And as tech has become more concentrated, which itself is the result of a policy choice not to enforce antitrust law, which allowed companies to gobble each other up, become very, very concentrated.
It became easier for them to speak with one voice in legislative outlets. You know, when Seth coined the term adversarial interoperability, it was about this conspiracy among the giant entertainment companies to make it illegal to build a computer that they hadn't approved of called the Broadcast Flag.
And the reason the entertainment companies were able to foist this conspiracy on the tech industry, which was even then, between one and two orders of magnitude larger than the entertainment companies, is that the entertainment companies were like seven firms and they spoke with one voice and tech was a rabble.
It was hundreds of companies. We were in those meetings for the broadcast protection discussion group where you saw hundreds of companies at each other's throats not able to speak with one voice. Today, tech speaks with one voice, and they have taken those self-help measures, that adversarial interoperability, that once checked their worst impulses, and they have removed them from us.
And so we get what Jay Freeman calls felony contempt of business model where, you know, the act of reverse engineering a printer cartridge or an office suite or mobile operating system gives rise to both civil and criminal penalties and that means no one invests in it. People who do it take enormous personal risks. There isn't the kind of support chain.
You definitely don't get that kind of thing where it's like, ‘just click this button to install this thing that makes your experience better.’ To the extent that it even exists, it's like, download this mysterious software from the internet. Maybe compile it yourself, then figure out how to get it onto your device.
No one's selling you a dongle in the checkout line at Walmart for 99 cents that just jailbreaks your phone. Instead, it's like becoming initiated into the Masons or something to figure out how to jailbreak your phone.

CINDY COHN
Yes, we managed to free jailbreaking directly through the exceptions process in the DMCA but it hasn’t ended up really helping many people. We got an exception to one part of the law but the very next section prevents most people from getting any real benefit.

CORY DOCTOROW
At the risk of like teaching granny to suck eggs, we know what the deficiency in the, in the exceptions process is, right? I literally just explained this to a fact checker at the Financial Times who's running my Enshittification speech, who's like you said that it's illegal to jailbreak phones, and yet I've just found this process where they made it legal to jailbreak phones and it's like, yeah, the process makes it legal for you to jailbreak your phone. It doesn't make it legal for anyone to give you a tool to jailbreak your phone or for you to ask anyone how that tool should work or compare notes with someone about how that, so you can like, gnaw your own jailbreaking tool out of a whole log in secret, right? Discover the, discover the defect in iOS yourself.
Figure out how to exploit it yourself. Write an alternative version of iOS yourself. And install it on your phone in the privacy of your own home. And provided you never tell anyone what you've done or how you did it, the law will permit you to do this and not send you to prison.
But give anyone any idea how you're doing it, especially in a commercial context where it's, you know, in the checkout aisle at the Walmart for 99 cents, off to prison with you. Five-hundred-thousand-dollar fine and a five-year prison sentence for a first offense for violating Section 1201 of the DMCA in a commercial way. Right? So, yeah, we have these exceptions, but they're mostly ornamental.

CINDY COHN
Well, I mean, I think that that's the, you know, it's profoundly weird, right? This idea that you can do something yourself, but if you help somebody else do it, that's illegal. It's a very strange thing. Of course, EFF has not liked the Digital Millennium Copyright Act since 1998 when it was passed, or probably 1995 when they started talking about it. But it is a situation in which, you know, we've chipped away at the law, and this is a thing that you've written a lot about. These fights are long fights and we have to figure out how to be in them for the long run and how to claim victory when we get even a small victory. So, you know, maybe this is a situation in which us crowing about some small victories has led people to be misled about the overarching story, which is still one where we've got a lot of work to do.

CORY DOCTOROW
Yeah, and I think that, you know, the way to understand this is as not just the DMCA, but also all the other things that we just colloquially call IP Law that constitute this thing that Jay calls felony contempt of business model. You know, there's this old debate among our tribe that, you know, IP is the wrong term to use. It's not really property. It doesn't crisply articulate a set of policies. Are we talking about trademark and patent and copyright, or do we wanna throw in broadcast rights and database rights and you know, whatever, but I actually think that in a business context, IP means something very, very specific.
When an investor asks a founder, ‘What IP do you have?’ what they mean is: what laws can you invoke that will allow you to exert control over the conduct of your competitors, your critics, and your customers? That's what they mean. And oftentimes, each IP law will have an escape valve, like the DMCA's triennial exemptions. But you can layer one in front of the other, in front of the other in order to create something where all of the exemptions are plugged. So, you know, copyright has these exceptions but then you add trademark, where like Apple is doing things like engraving nearly invisible Apple logos on the components inside of its phones, so that when they're refurbished in the Far East and shipped back as parts for independent repair, they ask the customs agency in the US to seize the parts for tarnishment of their trademark, because the parts are now of an unknown quality and they bear their logo, which means that it will degrade the public's opinion of the reliability of an Apple product. So, you know, copyright and patent don't stop them from doing this, but we still have this other layer of IP, and if you line the layers up in the right way, and this is what smart corporate lawyers do - they know the right pattern to line these different protections up, such that all of the exceptions that were supposed to provide a public interest, that were supposed to protect us as the users or protect society - each one of those is choked off by another layer.

CINDY COHN
I think that’s one of my biggest frustrations in fixing the internet. We get stuck fighting one fight at a time and just when we pass one roadblock we have to navigate another. In fact, one that we haven’t mentioned yet is contract law, with terms of service and clickwrap license agreements that block innovation and interoperability. It starts to feel more like a game, you know, can our intrepid coder navigate around all the legal hurdles and finally get to the win where they can give us back control over our devices and tools?

CORY DOCTOROW
I mean, this is one of the things that's exciting about the antitrust action that we're getting now, is that I think we're gonna see a lot of companies being bound by obligations whose legitimacy they don't acknowledge and which they are going to flout. And when they do, presuming that the enforcers remain committed to enforcing the law, we are going to have opportunities to say to them, ‘Hey, you're gonna need to enter into a settlement that is gonna restrict your future conduct. You're gonna have to spin off certain things. You're gonna have to allow certain kinds of interop or whatever’.
Then we've got these spaces opening up. And this is how I think about all of this, and it is very game-like, right? We have these turns. We're taking turns, our adversaries are taking turns. And what we want is not just to win ground, but we want to win high ground. We want to win ground from which we have multiple courses of action that are harder to head off. And one of the useful things about the Enshittification analysis is it tries to identify the forces that made companies treat us good. I think sometimes the companies treated us well because the people who ran them were honorable. But also you have to ask how those honorable people resisted their shareholders’ demands to shift value from the firm to their users or the other direction. What forces let them win, you know, in that fight. And if we can identify what forces made companies treat technology users better on the old good internet, then we can try and build up those forces for a new good internet. So, you know, one of the things that I think really helped the old good internet was the paradox of the worker power of the tech worker, because tech workers have always been well compensated. They've always had a lot of power to walk out of the job and go across the street and get another job with someone better. Tech workers had all of this power, which meant that they didn't ever really like form unions. Like tech union density historically has been really low. They haven't had formal power, they've had individual power, and that meant that they typically enjoyed high wages and quite cushy working conditions a lot of the time, right? Like the tech campus with the gourmet chef and the playground and the gym and the sports thing and the bicycles and whatever. But at the same time, this allowed the people they worked for to appeal to a sense of mission among these people. And it was, these were these like non-commercial ethical normative demands on the workforce.
And the appeals to those let bosses convince workers to work crazy hours. Right? You know, the extremely hardcore Elon Musk demand that you sleep under your desk, right? This is where it comes from, this sense of mission which meant, for the bosses, that there was this other paradox, which was that if you motivate your workers with a sense of mission, they will feel a sense of mission. And when you say, ‘Hey, this product that you fought really hard for, you have to make worse, right? You've, you know, missed your gallbladder surgery and your mom's funeral and your kid's little league championship to make this product. We want you to stick a bunch of gross ads in it,’ the people who did that job were like, no, I feel a sense of mission. I will quit and walk across the street and get another job somewhere better if this is what you demand of me. One of the constraints that's fallen away is this labor constraint. You know, when Google does a stock buyback and then lays off 12,000 workers within a few months, and the stock buyback would pay their wages for 27 years, like the workers who remain behind get the message that the answer to, no, I refuse to make this product worse is fine, turn in your badge and don't let the door hit you in the ass on the way out. And one of the things we've always had a trade in at EFF is tech workers who really cared about their users. Right? That's been the core of our membership. Those have been the whistleblowers we sometimes hear from. Those have been our clients sometimes. And we often say when companies have their users’ backs, then we have the company's back. If we were to decompose that more fully, I think we would often find that the company that has its users' back really has a critical mass of indispensable employees who have their users’ back, that within the balance of power in the company, it's tipping towards users. 
And so, you know, in this moment of unprecedented union formation, if not union density, this is an area where, you know, you and I, Cindy have written about this, where, where tech rights can be workers' rights, where bossware can cut against labor rights and interoperable tools that defeat bossware can improve workers’ agency within their workplace, which is good for them, but it's good for the people that they feel responsibility for, the users of the internet.

CINDY COHN
Yeah. I remember in the early days when I first joined EFF, and Adobe had had the FBI arrest Dmitry Sklyarov at DEF CON because he developed a piece of software that allowed people to copy and move their Adobe eBooks into other formats and platforms. Some of EFF’s leadership went to Adobe’s offices to talk to their leadership and see if we could get them to back off.
I remember being told about the scene because there were a bunch of hackers protesting outside the Adobe building, and they could see Adobe workers watching them from the windows of that building. We knew in that moment that we were winning, that Adobe was gonna back down because their internal conversations were, how come we're the guys who are sending the FBI after a hacker?
We had something similar happen with Apple more recently when Apple announced that it was going to do client-side scanning. We knew from the tech workers that we were in contact with inside the company that breaking end-to-end encryption was something that most of the workers didn't approve of. We actually flew a plane over Apple’s headquarters at One Infinite Loop to draw attention to the issue. Now whether it was the plane or not, it wasn't long before Apple backed down, because they felt the pressure from inside as well as outside. I think the tech workers are feeling disempowered right now, and it's important to keep telling these stories and reminding them that they do have power, because the first thing that a boss who wants to control you does is make you think you're all alone and you don't have any power. I appreciate that in the world we’re envisioning, where we start to get tech right, we're not just talking about users and what users get. We're talking about what workers and creators and hackers and innovators get, which is much more control and the ability to say no, or to say yes to something better than the thing that the company has chosen. I'm interested in continuing to try to tell these stories and have these conversations.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
And now back to our conversation with Cory Doctorow. Cory is well known for his writing and speaking but what some people may not realize is that he is a capital A Activist. I work with him closely on the activism team here at EFF, and I have seen firsthand how sometimes his eyes go red and he will throw everything he has into a fight. So I wanted to get him to talk a bit more about the activism side of his work, and what fuels that.

CORY DOCTOROW
I tried to escape EFF at one point. I actually was like, God, you know, the writing and the activism, I can't do both. I'm just gonna do one. And so I went off to do the writing for a few years, and I got so pissed off with things going wrong in the world that I wasn't actively engaged in trying to fix that I just lost it. And I was like, I, whatever negative effects accrue due to overwork are far less significant to me, both like intellectually and kind of emotionally, than the negative effects I get from feeling hopeless, right, and helpless and sitting on the sidelines while things that are just untenable, go on. And, you know, Cindy said it before, it's a long game, right? The activism game. We are sowing the seeds of a crop that we may never reap. And I am willing to understand and believe and make my peace with the idea that some of the stuff that I'm doing will be victories after I've left the field, right, it'll be for people who haven't even graduated high school yet, let alone going to work for EFF or one of our allies.
And so when I see red, when I get really angry - you know, the DRM in browsers at the W3C, or the European Union trying for mandatory copyright filters for online services - I think, this is a fight we may not win, but it's a fight that we must fight, right? The stakes are too high not to win it, and if we lose it this time around, we will lay the groundwork for a future victory. We will create the people who are angry that the policy came out this way, who will be ready when some opportunity opens up in the future. Because, you know, in these fights that we fight, the side that we're on is the side of producing something good and stable and beneficial. And the thing that we're fighting against has massive downstream harms, whether that's mandatory copyright filters or client-side scanning or breaking end-to-end encryption, right? Like, if we lose a breaking end-to-end encryption fight, what we have lost is the safety of millions of people in whatever country that rule has been enacted, and that means that, in a way that is absolutely deplorable and that the architects of these policies should be ashamed of, some of those people are gonna come to the most terrible harms in the future. And the thing that we should be doing, because we have lost the fight to stop those harms from occurring, is to be ready, when those harms occur, to step in and say not just we told you so, but: here's how we fix it. Here's the thing that we're going to do to turn this crisis into the opportunity to precipitate a change.

JASON KELLEY
Yeah, that's right. Something that has always pleased me is when we have a guest here on the podcast, and we've had many, who have talked about the Blue Ribbon Campaign. And it’s clear that, you know, we won that fight, but years and years ago, we put together this coalition of people, maybe unintentionally, that is still with us today. And it is nice to imagine that, with the wins and the losses, we gain bigger numbers as we lay that groundwork.

CINDY COHN
And I think there is something also fun about trying to build the better world, being the good guys. I think there is something powerful about that. The fights are long, they're hard. I always say that, you know, the good guys throw better parties. And so on the one hand it's, yes, it's the anger; your eyes see red, we have to stop this bad thing from happening. But the other thing is that the other people who are joining with you in the fight are really good people to hang out with. And so I guess I, I wanna make sure that we're talking about both sides of a kind of activist life because they're both important. And if it wasn't for the fun part - fun when you win - sometimes a little gallows humor when you don't, that's as important as the anger side because if you're gonna be in it for the long run you can't just run on, you know, red-eyed anger alone.

CORY DOCTOROW
You know, I have this great laptop from this company Framework. I promise you this goes somewhere. It, uh, is a user-serviceable laptop, so it comes with a screwdriver. Even someone who's really klutzy like me can fix their laptop. And, uh, I drop my laptops all the time, and the screws had started coming loose on the bottom, and they were like, ‘Hey, this sounds like a thing that we didn't anticipate when we designed it. Why don't we ship you a free one and you ship us back the broken one, and we can analyze it for future ones.’ So I just did this, I swapped out the bottom cover of my laptop at the weekend, which meant that I had a new sticker surface for my laptop. And I found a save .org ‘some things are not for sale’ sticker, which was, you know, this incredible campaign that we ran with our lost and beloved colleague Elliot, and putting that sticker on felt so good. You know, it was just like, yeah, this is like a head on a pike for me. This is great.

CINDY COHN
And for those who may not have followed that, just at the beginning of Covid actually, there was an effort by private equity to buy control of the .org domain, which of course means EFF.org, but it means every other nonprofit. And we marshaled a tremendous coalition of nonprofits and others to essentially, you know, make the deal not happen, and save .org for, you know, the .orgs. And as Cory mentioned, our dear friend Elliot, who was our activism director at the time - that was his last campaign before he got sick. And we did, we won. We saved .org. Now that fight continues. Uh, things are not all perfect in .org land, but we did head that one off, and that included a very funky, nerdy protest in front of an ICANN meeting that a bunch of people came to.

CORY DOCTOROW
Top-level domains: still a dumpster fire. In other news, water's still wet. You know, the thing about that campaign that was so great is it was one where we didn't have a path to victory. We didn't have a legal leg to stand on. The organization was just like operating in its own kind of bubble, where it was - formally, at least on paper - insulated from public opinion, from stakeholder opinions. It just got to do whatever it wanted. And we just like kind of threw everything at it. We tried all kinds of different tactics, and cumulatively they worked. And there were weird things that came in at the end, like Xavier Becerra, who was then the Attorney General of California, going like, well, you're kind of, you're a California nonprofit. Like, I think maybe we're gonna wanna look at this.
And then all of a sudden everyone was just like, no, no, no, no, no. But you know, it wasn't like Becerra saved it, right? It was like we built up the political pressure that caused the Attorney General of California who's got a thing or two on his plate, to kind of get up on his hind legs and go, ‘Hey, wait a second. What's going on here?’
And there've been so many fights like that over the years. You know, this is, this is the broadcast treaty at the UN. I remember when we went, our then-colleague Fred von Lohmann was like, ‘I know how to litigate in the United States 'cause we have like constitutional rights in the United States. The UN is not going to let NGOs set the agenda or sue. You can't force them to give you time.’ You know, it's like you have all the cards stacked against you there, but we killed the broadcast treaty, and we did it by being digitally connected with activists all over the world, which allowed us to exploit the flexibility of digital tools to have a fluid, improvisational style, to improvise in the moment new tactics that went around the roadblocks that were put in our way. And some of them were surreal, like our handouts were being stolen and hidden in the toilets. Uh, but you know, it was a very weird fight.
And we trounced the most powerful corporations in the world in a forum that was completely stacked against us. And you know, that's the activist joy here too, right? It's like you go into these fights with the odds stacked against you. You never know whether or not there is a lurking potential for a novel tactic that your adversary is completely undefended on, where you can score really big, hard-to-anticipate wins. And I think of this as being related to a theory of change that I often discuss when people ask me about optimism and pessimism.
Because I don't like optimism and pessimism. I think they're both a form of fatalism. That optimism and pessimism are both the idea that the outcome of events are unrelated to human conduct, right? Things will get worse or things will get better. You just sit on the sidelines. It's coming either way. The future is a streetcar on tracks and it's going where it's going.
But I think that hope is this idea that if you're like, trying to get somewhere and you don't know how to get there, you're trying to ascend a gradient towards a better future - if you ascend that gradient to the extent that you can see the path from where you are now, that you can attain a vantage point from which you can see new possible ways of going that were obscured from where you were before, that doing something changes the landscape, changes where you're situated and may reveal something else you can do.

CINDY COHN
Oh, that's such a lovely place to end. Thank you so much, Cory, for taking time to talk with us. We're gonna keep walking that path, and we're gonna keep looking for the little edges and corners and ways, you know, that we can continue to push forward the better internet because we all deserve it.

JASON KELLEY
Thanks, Cory. It's really nice to talk to you.

CORY DOCTOROW
Oh, it was my pleasure.

JASON KELLEY
You know, I get a chance to talk to Cory more often than most people, and I'm still just overjoyed when it gets to happen. What did you think of that conversation, Cindy?

CINDY COHN
What I really liked about it is that he really grounds, you know, what could be otherwise, a kind of wonky thing - adversarial interoperability or competitive compatibility - in a list of very concrete things that have happened in the past and not the far past, the fairly recent past. And so, you know, building a better future really means just bringing some of the tools to bear that we've already brought to bear in other situations, just to our new kind of platform Enshittification world. Um, and I think it makes it feel much more doable than something that might be, you know, a pie in the sky. And then we all go to Mars and everything gets better.

JASON KELLEY
Yeah. You know, he's really good at saying, here's how we can learn from what we actually got right in the past. And that's something people don't often do in this field; it's usually trying to learn from what we got wrong. And the part of the conversation that I loved was just hearing him talk about how he got back into doing the work. You know, he said he wanted to do writing or activism, because he was just doing too much, but in reality, he couldn't do just one of the two because he cares so much about what's going on. When he was saying, sort of, what gets his eyes to turn red, it reminded me of when we were speaking with Gaye Gordon-Byrne about right to repair, and how she had been retired and just decided, after getting pulled back in again and again, to go wholly committed to fighting for the right to repair. You know that quote from The Godfather about being continually pulled back in. This is Cory, and people like him, I think, to a tee.

CINDY COHN
Yeah, I think so too. That reminded me of what, what she said. And of course I was on the other side of it. I was one of the people that Cory was pinging over and over again.

JASON KELLEY
So you pulled him back in.

CINDY COHN
Well, I think he pulled himself back in. I was just standing there. Um, but it is fun to watch somebody feel their passion grow so much that they just have to get back into the fight. And I think Gaye really told that same trajectory of how, you know, sometimes something just bugs you enough that you decide, look, I gotta figure out how to get into this fight and make things better.

JASON KELLEY
And hopefully people listening will have that same feeling. And I know that, you know, many of our supporters do already.
Thanks for joining us for this episode of How to Fix the Internet. If you have any feedback or suggestions, we would be happy to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, maybe you could become an EFF member and maybe you could pick up some merch. We've got very good t-shirts. Or you can just peruse to see what's happening in digital rights this week and every week. This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Xena's Kiss / Medea's Kiss by M. Wick, Probably Shouldn't by J. Lang featuring Mr. Yesterday, Come Inside by Zep Hurme featuring Snowflake, and Drops of H2O (The Filtered Water Treatment) by J. Lang featuring Airtone. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I hope you'll join us again. I'm Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Government Has Extremely Heavy Burden to Justify TikTok Ban, EFF Tells Appeals Court

By Josh Richman
27 June 2024 at 10:17
New Law Subject to Strictest Scrutiny Because It Imposes Prior Restraint, Directly Restricts Free Speech, and Singles Out One Platform for Prohibition, Brief Argues

SAN FRANCISCO — The federal ban on TikTok must be put under the finest judicial microscope to determine its constitutionality, the Electronic Frontier Foundation (EFF) and others argued in a friend-of-the-court brief filed Wednesday to the U.S. Court of Appeals for the D.C. Circuit. 

The amicus brief says the Court must review the Protecting Americans from Foreign Adversary Controlled Applications Act — passed by Congress and signed by President Biden in April — with the most demanding legal scrutiny because it imposes a prior restraint that would make it impossible for users to speak, access information, and associate through TikTok. It also directly restricts protected speech and association, and deliberately singles out a particular medium for a blanket prohibition. This demanding First Amendment test must be used even when the government asserts national security concerns. 

The Court should see this law for what it is: “a sweeping ban on free expression that triggers the most exacting scrutiny under the First Amendment,” the brief argues, adding it will be extremely difficult for the government to justify this total ban. 

Joining EFF in this amicus brief are the Freedom of the Press Foundation, TechFreedom, Media Law Resource Center, Center for Democracy and Technology, First Amendment Coalition, and Freedom to Read Foundation. 

TikTok hosts a wide universe of expressive content from musical performances and comedy to politics and current events, the brief notes, and with more than 150 million users in the United States and 1.6 billion users worldwide, the platform hosts enormous national and international communities that most U.S. users cannot readily reach elsewhere. It plays an especially important and outsized role for minority communities seeking to foster solidarity online and to highlight issues vital to them. 

“The First Amendment protects not only TikTok’s US users, but TikTok itself, which posts its own content and makes editorial decisions about what user content to carry and how to curate it for each individual user,” the brief argues.  

Congress’s content-based justifications for the ban make it clear that the government is targeting TikTok because it finds speech that Americans receive from it to be harmful, and simply invoking national security without clearly demonstrating a threat doesn’t overcome the ban’s unconstitutionality, the brief argues. 

“Millions of Americans use TikTok every day to share and receive ideas, information, opinions, and entertainment from other users around the world, and that lies squarely within the protections of the First Amendment,” EFF Civil Liberties Director David Greene said. “By barring all speech on the platform before it can happen, the law effects the kind of prior restraint that the Supreme Court has rejected for the past century as unconstitutional in all but the rarest cases.” 

For the brief: https://www.eff.org/document/06-26-2024-eff-et-al-amicus-brief-tiktok-v-garland

For EFF’s stance on TikTok bans: https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban 

Contact: David Greene, Civil Liberties Director

EFF Welcomes Tarah Wheeler to Its Board of Directors

By Josh Richman
25 June 2024 at 17:42
Wheeler Brings Perspectives on Information Security and International Conflict to the Board of Directors

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce today that Tarah Wheeler — a social scientist studying international conflict, an author, and a poker player who is CEO of the cybersecurity compliance company Red Queen Dynamics — has joined EFF’s Board of Directors. 

Wheeler has served on EFF’s advisory board since June 2020. She is the Senior Fellow for Global Cyber Policy at the Council on Foreign Relations and was elected to Life Membership at CFR in 2023. She is an inaugural contributing cybersecurity expert for the Washington Post, and a Foreign Policy contributor on cyber warfare. She is the author of the best-selling “Women In Tech: Take Your Career to The Next Level With Practical Advice And Inspiring Stories” (2016). 

“I am very excited to have Tarah bring her judgment, her technical expertise and her enthusiasm to EFF’s Board,” EFF Executive Director Cindy Cohn said. “She has supported us in many ways before now, including creating and hosting the ‘Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON,’ which will have its third year this summer. Now we get to have her in a governance role as well.” 

"I am deeply honored to join the Board of Directors at the Electronic Frontier Foundation,” Wheeler said. “EFF's mission to defend civil liberties in the digital world is more critical than ever, and I am humbled to be invited to serve in this work. EFF has been there for me and other information security researchers when we needed a champion the most. Together, we will continue to fight for the rights and freedoms that ensure a free and open internet for all." 

Wheeler has been a US/UK Fulbright Scholar in Cyber Security and Fulbright Visiting Scholar at the Centre for the Resolution of Intractable Conflict at the University of Oxford, the Brookings Institution’s contributing cybersecurity editor, a Cyber Project Fellow at the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government, and an International Security Fellow at New America leading a new international cybersecurity capacity building project with the Hewlett Foundation’s Cyber Initiative. She has been Head of Offensive Security & Technical Data Privacy at Splunk and Senior Director of Engineering and Principal Security Advocate at Symantec Website Security. She has led projects at Microsoft Game Studios (Halo and Lips) and architected systems at encrypted mobile communications firm Silent Circle. She has two cashes and $4,722 in lifetime earnings in the World Series of Poker. 

Members of the Board of Directors ensure EFF’s sustainability by adopting sound, ethical, and legal governance and financial management policies so that the organization has adequate resources to advance its mission.  

Shari Steele — who had been on EFF’s Board since 2015 when she ceased being EFF’s Executive Director — has rotated off the Board. Gigi Sohn has been elected Chair of the Board. 

For the full roster of EFF’s Board of Directors: https://www.eff.org/about/board

Podcast Episode: AI in Kitopia

By Josh Richman
18 June 2024 at 03:05

Artificial intelligence will neither solve all our problems nor likely destroy the world, but it could help make our lives better if it’s both transparent enough for everyone to understand and available for everyone to use in ways that augment us and advance our goals — not for corporations or government to extract something from us and exert power over us. Imagine a future, for example, in which AI is a readily available tool for helping people communicate across language barriers, or for helping vision- or hearing-impaired people connect better with the world.


(You can also find this episode on the Internet Archive and on YouTube.)

This is the future that Kit Walsh, EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects, and EFF Senior Staff Technologist Jacob Hoffman-Andrews, are working to bring about. They join EFF’s Cindy Cohn and Jason Kelley to discuss how AI shouldn’t be a tool to cash in, or to classify people for favor or disfavor, but instead to engage with technology and information in ways that advance us all. 

In this episode you’ll learn about: 

  • The dangers in using AI to determine who law enforcement investigates, who gets housing or mortgages, who gets jobs, and other decisions that affect people’s lives and freedoms. 
  • How “moral crumple zones” in technological systems can divert responsibility and accountability from those deploying the tech. 
  • Why transparency and openness of AI systems — including training AI on consensually obtained, publicly visible data — is so important to ensure systems are developed without bias and to everyone’s benefit. 
  • Why “watermarking” probably isn’t a solution to AI-generated disinformation. 

Kit Walsh is a senior staff attorney at EFF, serving as Director of Artificial Intelligence & Access to Knowledge Legal Projects. She has worked for years on issues of free speech, net neutrality, copyright, coders' rights, and other issues that relate to freedom of expression and access to knowledge, supporting the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Before joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard University's Berkman Klein Center for Internet and Society; earlier, she worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria. 

Jacob Hoffman-Andrews is a senior staff technologist at EFF, where he is lead developer on Let's Encrypt, the free and automated Certificate Authority; he also works on EFF's Encrypt the Web initiative and helps maintain the HTTPS Everywhere browser extension. Before working at EFF, Jacob was on Twitter's anti-spam and security teams. On the security team, he implemented HTTPS-by-default with forward secrecy, key pinning, HSTS, and CSP; on the anti-spam team, he deployed new machine-learned models to detect and block spam in real-time. Earlier, he worked on Google’s maps, transit, and shopping teams.

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KIT WALSH
Contrary to some marketing claims, AI is not the solution to all of our problems. So I'm just going to talk about how AI exists in Kitopia. And in particular, the technology is available for everyone to understand. It is available for everyone to use in ways that advance their own values rather than hard coded to advance the values of the people who are providing it to you and trying to extract something from you and as opposed to embodying the values of a powerful organization, public or private, that wants to exert more power over you by virtue of automating its decisions.
So it can make more decisions classifying people, figuring out whom to favor, whom to disfavor. I'm defining Kitopia a little bit in terms of what it's not, but to get back to the positive vision, you have this intellectual commons of research, of development, of data. We haven't really touched on privacy yet, but that data is sourced in a consensual way. And essentially, one of the things that I would love to have is a little AI muse that actually does embody my values and amplifies my ability to engage with technology and information on the Internet in a way that doesn't feel icky or oppressive, and I don't have that in the world yet.

CINDY COHN
That’s Kit Walsh, describing an ideal world she calls “Kitopia”. Kit is a senior staff attorney at the Electronic Frontier Foundation. She works on free speech, net neutrality and copyright and many other issues related to freedom of expression and access to knowledge. In fact, her full title is EFF’s Director of Artificial Intelligence & Access to Knowledge Legal Projects. So, where is Kitopia, you might ask? Well we can’t get there from here - yet. Because it doesn’t exist. Yet. But here at EFF we like to imagine what a better online world would look like, and how we will get there and today we’re joined by Kit and by EFF’s Senior Staff Technologist Jacob Hoffman-Andrews. In addition to working on AI with us, Jacob is a lead developer on Let's Encrypt, and his work on that project has been instrumental in helping us encrypt the entire web. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

JACOB HOFFMAN-ANDREWS
I think in my ideal world people are more able to communicate with each other across language barriers, you know, automatic translation, transcription of the world for people who are blind or for deaf people to be able to communicate more clearly with hearing people. I think there's a lot of ways in which AI can augment our weak human bodies in ways that are beneficial for people and not simply increasing the control that their governments and their employers have over their lives and their bodies.

JASON KELLEY
We’re talking to Kit and Jacob both, because this is such a big topic that we really need to come at it from multiple angles to make sense of it and to figure out the answer to the really important question which is, How can AI actually make the world we live in, a better place?

CINDY COHN
So while many other people have been trying to figure out how to cash in on AI, Kit and Jacob have been looking at AI from a public interest and civil liberties perspective on behalf of EFF. And they’ve also been giving a lot of thought to what an ideal AI world looks like.

JASON KELLEY
AI can be more than just another tool that’s controlled by big tech. It really does have the potential to improve lives in a tangible way. And that’s what this discussion is all about. So we’ll start by trying to wade through the hype, and really nail down what AI actually is and how it can and is affecting our daily lives.

KIT WALSH
The confusion is understandable because AI is being used as a marketing term quite a bit, rather than as an abstract concept, rather than as a scientific concept.
And the ways that I think about AI, particularly in the decision-making context, which is one of our top priorities in terms of where we think that AI is impacting people's rights, is first I think about what kind of technology are we really talking about because sometimes you have a tool that actually no one is calling AI, but it is nonetheless an example of algorithmic decision-making.
That also sounds very fancy. This can be a fancy computer program to make decisions, or it can be a buggy Excel spreadsheet that litigators discover is actually just omitting important factors when it's used to decide whether people get health care or not in a state health care system.

CINDY COHN
You're not making those up, Kit. These are real examples.

KIT WALSH
That’s not a hypothetical. Unfortunately, it’s not a hypothetical, and the people who litigated that case lost some clients because when you're talking about not getting health care that can be life or death. And machine learning can either be a system where you – you, humans, code a reinforcement mechanism. So you have sort of random changes happening to an algorithm, and it gets rewarded when it succeeds according to your measure of success, and rejected otherwise.
It can be training on vast amounts of data, and that's really what we've seen a huge surge in over the past few years, and that training can either be what's called unsupervised, where you just ask your system that you've created to identify what the patterns are in a bunch of raw data, maybe raw images, or it can be supervised in the sense that humans, usually low paid humans, are coding their views on what's reflected in the data.
So I think that this is a picture of a cow, or I think that this picture is adult and racy. So some of these are more objective than others, and then you train your computer system to reproduce those kinds of classifications when it makes new things that people ask for with those keywords, or when it's asked to classify a new thing that it hasn't seen before in its training data.
So that's really a very high level oversimplification of the technological distinctions. And then because we're talking about decision-making, it's really important who is using this tool.
Is this the government which has all of the power of the state behind it and which administers a whole lot of necessary public benefits - that is using decisions to decide who is worthy and who is not to obtain those benefits? Or, who should be investigated? What neighborhoods should be investigated?
We'll talk a little bit more about the use in law enforcement later on, but it's also being used quite a bit in the private sector to determine who's allowed to get housing, whether to employ someone, whether to give people mortgages, and that's something that impacts people's freedoms as well.
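Kit's distinction between kinds of training can be made concrete with a toy sketch. This is an editorial illustration, not anything discussed in the episode: the feature vectors and labels are invented, and a simple nearest-centroid rule stands in for a real model. The "supervised" part is exactly what Kit describes: humans code their views of what's in the data (cow or not cow), and the system learns to reproduce those classifications on new inputs.

```python
# Toy supervised classifier: learn the average ("centroid") of each
# human-assigned label, then label new inputs by the nearest centroid.
# All data here is made up for illustration.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the features."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Human-labeled training data (the "supervised" part): each vector is a
# made-up pair of image statistics, each label a human judgment.
labeled = [([0.9, 0.1], "cow"), ([0.8, 0.2], "cow"),
           ([0.1, 0.9], "cat"), ([0.2, 0.8], "cat")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # a new, unseen input
```

The key property Kit points to holds even in this tiny sketch: the model can only echo the judgments of whoever labeled the data, which is why the question of who did the labeling, and how, matters so much.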

CINDY COHN
So Jacob, two questions I use to distill down AI decision-making are: who is the decision-making supposed to be serving, and who bears the consequences if it gets it wrong? And if we think of those two framing questions, I think we get at a lot of the issues from a civil liberties perspective. That sound right to you?

JACOB HOFFMAN-ANDREWS
Yeah, and, you know, talking about who bears the consequences when an AI or technological system gets it wrong, sometimes it's the person that system is acting upon, the person who's being decided whether they get healthcare or not and sometimes it can be the operator.
You know, it's, uh, popular to have kind of human in the loop, like, oh, we have this AI decision-making system that's maybe not fully baked. So there's a human who makes the final call. The AI just advises the human and, uh, there's a great paper by Madeleine Clare Elish describing this as a form of moral crumple zones. Uh, so, you may be familiar in a car, modern cars are designed so that in a collision, certain parts of the car will collapse to absorb the force of the impact.
So the car is destroyed but the human is preserved. And in some human-in-the-loop decision-making systems, often involving AI, it's kind of the reverse. The human becomes the crumple zone for when the machine screws up. You know, you were supposed to catch the machine screwup. It didn't screw up in over a thousand iterations and then the one time it did, well, that was your job to catch it.
And, you know, these are obviously, you know, a crumple zone in a car is great. A moral crumple zone in a technological system is a really bad idea. And it takes away responsibility from the deployers of that system who ultimately need to bear the responsibility when their system harms people.

CINDY COHN
So I wanna ask you, what would it look like if we got it right? I mean, I think we do want to have some of these technologies available to help people make decisions.
They can find patterns in giant data probably better than humans can most of the time. And we'd like to be able to do that. So since we're fixing the internet now, I want to stop you for a second and ask you how would we fix the moral crumple zone problem or what were the things we think about to do that?

JACOB HOFFMAN-ANDREWS
You know, I think for the specific problem of, you know, holding say a safety driver or like a human decision-maker responsible for when the AI system they're supervising screws up, I think ultimately what we want is that the responsibility can be applied all the way up the chain to the folks who decided that that system should be in use. They need to be responsible for making sure it's actually a safe, fair system that is reliable and suited for purpose.
And you know, when a system is shown to bring harm, for instance, you know, a self-driving car that crashes into pedestrians and kills them, you know, that needs to be pulled out of operation and either fixed or discontinued.

CINDY COHN
Yeah, it made me think a little bit about, you know, kind of a change that was made, I think, by Toyota years ago, where they let the people on the front line stop the line, right? Um, I think one thing that comes out of that is you need to let the people who are in the loop have the power to stop the system, and I think all too often we don't.
We devolve the responsibility down to that person who's kind of the last fair chance for something but we don't give them any responsibility to raise concerns when they see problems, much less the people impacted by the decisions.

KIT WALSH
And that’s also not an accident of the appeal of these AI systems. It's true that you can't hold a machine accountable really, but that doesn't deter all of the potential markets for the AI. In fact, it's appealing for some regulators, some private entities, to be able to point to the supposed wisdom and impartiality of an algorithm, which if you understand where it comes from, the fact that it's just repeating the patterns or biases that are reflected in how you trained it, you see it's actually, it's just sort of automated discrimination in many cases and that can work in several ways.
In one instance, it's intentionally adopted in order to avoid the possibility of being held liable. We've heard from a lot of labor rights lawyers that when discriminatory decisions are made, they're having a lot more trouble proving it now because people can point to an algorithm as the source of the decision.
And if you were able to get insight in how that algorithm were developed, then maybe you could make your case. But it's a black box. A lot of these things that are being used are not publicly vetted or understood.
And it's especially pernicious in the context of the government making decisions about you, because we have centuries of law protecting your due process rights to understand and challenge the ways that the government makes determinations about policy and about your specific instance.
And when those decisions and when those decision-making processes are hidden inside an algorithm then the old tools aren't always effective at protecting your due process and protecting the public participation in how rules are made.

JASON KELLEY
It sounds like in your better future, Kit, there's a lot more transparency into these algorithms, into this black box that's sort of hiding them from us. Is that part of what you see as something we need to improve to get things right?

KIT WALSH
Absolutely. Transparency and openness of AI systems is really important to make sure that as it develops, it develops to the benefit of everyone. It's developed in plain sight. It's developed in collaboration with communities and a wider range of people who are interested and affected by the outcomes, particularly in the government context though I'll speak to the private context as well. When the government passes a new law, that's not done in secret. When a regulator adopts a new rule, that's also not done in secret. There are exceptions, sure.

CINDY COHN
Right, but that’s illegal.

JASON KELLEY
Yeah, that's the idea. Right. You want to get away from that also.

KIT WALSH
Yeah, if we can live in Kitopia for a moment where, where these things are, are done more justly, within the framework of government rulemaking, if that's occurring in a way that affects people, then there is participation. There's meaningful participation. There's meaningful accountability. And in order to meaningfully have public participation, you have to have transparency.
People have to understand what the new rule is that's going to come into force. And because of a lot of the hype and mystification around these technologies, they're being adopted under what's called a procurement process, which is the process you use to buy a printer.
It's the process you use to buy an appliance, not the process you use to make policy. But these things embody policy. They are the rule. Sometimes when the legislature changes the law, the tool doesn't get updated and it just keeps implementing the old version. And that means that the legislature's will is being overridden by the designers of the tool.

JASON KELLEY
You mentioned predictive policing, I think, earlier, and I wonder if we could talk about that for just a second because it's one way where I think we at EFF have been thinking a lot about how this kind of algorithmic decision-making can just obviously go wrong, and maybe even should never be used in the first place.
What we've seen is that it's sort of, you know, very clearly reproduces the problems with policing, right? But how does AI or this sort of predictive nature of the algorithmic decision-making for policing exacerbate these problems? Why is it so dangerous I guess is the real question.

KIT WALSH
So one of the fundamental features of AI is that it looks at what you tell it to look at. It looks at what data you offer it, and then it tries to reproduce the patterns that are in it. Um, in the case of policing, as well as related issues around decisions for pretrial release and parole determinations, you are feeding it data about how the police have treated people, because that's what you have data about.
And the police treat people in harmful, racist, biased, discriminatory, and deadly ways that it's really important for us to change, not to reify into a machine that is going to seem impartial and seem like it creates a veneer of justification for those same practices to continue. And sometimes this happens because the machine is making an ultimate decision, but that's not usually what's happening.
Usually the machine is making a recommendation. And one of the reasons we don't think that having a human in the loop is really a cure for the discriminatory harms is that humans are more likely to follow the AI if it gives them cover for a biased decision that they're going to make. And relatedly, some humans, a lot of people, develop trust in the machine and wind up following it quite a bit.
So in these contexts, if you really wanted to make predictions about where a crime was going to occur, well it would send you to Wall Street. And that's not, that's not the result that law enforcement wants.
But, first of all, you would actually need data about where crimes occur, and generally people who don't get caught by the police are not filling out surveys to say, here are the crimes I got away with so that you can program a tool that's going to do better at sort of reflecting some kind of reality that you're trying to capture. You only know how the system has treated people so far and all that you can do with AI technology is reinforce that. So it's really not an appropriate problem to try to solve with this technology.

CINDY COHN
Yeah, our friends at Human Rights Data Analysis Group who did some of this work said, you know, we call it predictive policing, but it's really predicting the police because we're using what the police already do to train up a model, and of course it's not going to fix the problems with how police have been acting in the past. Sorry to interrupt. Go on.

KIT WALSH
No, to build on that, by definition, it thinks that the past behavior is ideal, and that's what it should aim for. So, it's not a solution to any kind of problem where you're trying to change a broken system.

CINDY COHN
And in fact, what they found in the research was that the AI system will not only replicate what the police do, it will double down on the bias because it's seeing a small trend and it will increase the trend. And I don't remember the numbers, but it's pretty significant. So it's not just that the AI system will replicate what the police do. What they found in looking at these systems is that the AI systems increase the bias in the underlying data.
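The "doubling down" dynamic can be shown with a deliberately crude simulation. This is an editorial sketch, not HRDAG's actual analysis, and every number in it is invented: two districts with identical true crime rates, but slightly skewed historical arrest data. The tool sends patrols wherever the data points, and police record crime where they patrol, so the initial skew compounds instead of washing out.

```python
# Two districts that are actually identical, plus biased history.
true_rate = [0.1, 0.1]          # same underlying crime rate in both
observed = [6, 4]               # skewed historical arrest counts: 60/40

for day in range(30):
    target = observed.index(max(observed))        # patrol the "hot" district
    observed[target] += int(true_rate[target] * 10)  # you find crime where you look

print(observed)  # the 60/40 skew in the data has grown to 90/10
```

No new information about the world ever enters the loop; the model is, as the episode puts it, predicting the police, and the feedback makes the recorded disparity larger than the one it started with.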
It's really important that we continue to emphasize the ways in which AI and machine learning are already being used and already being used in ways that people may not see, but dramatically impact them. But right now, what's front of mind for a lot of people is generative AI. And I think many, many more people have started playing around with that. And so I want to start with how we think about generative AI and the issues it brings. And Jacob, I know you have some thoughts about that.

JACOB HOFFMAN-ANDREWS
Yeah. To call back to, at the beginning you asked about, how do we define AI? I think one of the really interesting things in the field is that it's changed so much over time. And, you know, when computers first became broadly available, you know, people have been thinking for a very long time, what would it mean for a computer to be intelligent? And for a while we thought, wow, you know, if a computer could play chess and beat a human, we would say that's an intelligent computer.
Um, if a computer could recognize, uh, what's in an image, is this an image of a cat or a cow - that would be intelligence. And of course now they can, and we don't consider it intelligence anymore. And you know, now we might say if a computer could write a term paper, that's intelligence and I don't think we're there yet, but the development of chatbots does make a lot of people feel like we're closer to intelligence because you can have a back and forth and you can ask questions and receive answers.
And some of those answers will be confabulations, but some percentage of the time they'll be right. And it starts to feel like something you're interacting with. And I think, rightly so, people are worried that this will destroy jobs for writers and for artists. And to an earlier question about, you know, what does it look like if we get it right, I think, you know, the future we want is one where people can write beautiful things and create beautiful things and, you know, still make a great living at it and be fulfilled and safe in their daily needs and be recognized for that. And I think that's one of the big challenges we're facing with generative AI.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. How to Fix the Internet is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our discussion with Kit and Jacob about AI: the good, the bad, and what could be better.

CINDY COHN
There’s been a lot of focus on the dark side of generative AI and the idea of using copyright to address those problems has emerged. We have worries about that as a way to sort out between good and bad uses of AI, right Kit?

KIT WALSH
Absolutely. We have had a lot of experience with copyright being used as a tool of censorship, not only against individual journalists and artists and researchers, but also against entire mediums for expression, against libraries, against the existence of online platforms where people are able to connect and copyright not only lasts essentially forever, it comes with draconian penalties that are essentially a financial death sentence for the typical person in the United States. So in the context of generative AI, there is a real issue with the potential to displace creative labor. And it's a lot like the issues of other forms of automation that displace other forms of labor.
And it's not always the case that an equal number of new jobs are created, or that those new jobs are available to the people who have been displaced. And that's a pretty big social problem that we have. In Kitopia, we have AI and it's used so that there's less necessary labor to achieve a higher standard of living for people, and we should be able to be excited about automation of labor tasks that aren't intrinsically rewarding.
One of the reasons that we're not is because the fruits of that increased production flow to the people who own the AI, not to the people who were doing that labor, who now have to find another way to trade their labor for money or else become homeless and starve and die, and that's cruel.
It is the world that we're living in so it's really understandable to me that an artist is going to want to reach for copyright, which has the potential of big financial damages against someone who infringes, and is the way that we've thought about monetization of artistic works. I think that way of thinking about it is detrimental, but I also think it's really understandable.
One of the reasons why the particular legal theories in the lawsuits against generative AI technologies are concerning is because they wind up stretching existing doctrines of copyright law. So in particular, the very first case against Stable Diffusion argued that you were creating an infringing derivative work when you trained your model to recognize the patterns in five billion images.
It's a derivative work of each and every one of them. And that can only succeed as a legal theory if you throw out the existing understanding of what a derivative work is, that it has to be substantially similar to a thing that it's infringing and that limitation is incredibly important for human creativity.
The elements of my work that you might recognize from my artistic influences in the ordinary course of artistic borrowing and inspiration are protected. I'm able to make my art without people coming after me because I like to draw eyes the same way as my inspiration or so on, because ultimately the work is not substantially similar.
And if we got rid of that protection, it would be really bad for everybody.
But at the same time, you can see how someone might say, why should I pay a commission to an artist if I can get something in the same style? To which I would say, try it. It's not going to be what you want because art is not about replicating patterns that are found in a bunch of training data.
It can be a substitute for stock photography or other forms of art that are on the lower end of how much creativity is going into the expression, but for the higher end, I think that part of the market is safe. So I think all artists are potentially impacted by this. I'm not saying only bad artists have to care, but there is this real impact.
Their financial situation is precarious already, and they deserve to make a living, and this is a bandaid because we don't have a better solution in place to support people and let them create in a way that is in accord with their values and their goals. We really don't have that either in the situation where people are primarily making their income doing art that a corporation wants them to make to maximize its products.
No artist wants to create assets for content. Artists want to express and create new beauty and new meaning and the system that we have doesn't achieve that. We can certainly envision better ones but in the meantime, the best tool that artists have is banding together to negotiate with collective power, and it's really not a good enough tool at this point.
But I also think there's a lot of room to ethically use generative AI if you're working with an artist and you're trying to communicate your vision for something visual, maybe you're going to use an AI tool in order to make something that has some of the elements you're looking for and then say this, this is what I want to pay you to, to draw. I want this kind of pose, right? But, but, more unicorns.

JASON KELLEY
And I think while we're talking about these sort of seemingly good, but ultimately dangerous solutions for the different sort of problems that we're thinking about now more than ever because of generative AI, I wanted to talk with Jacob a little bit about watermarking. And this is meant to solve a sort of problem of knowing what is and is not generated by AI.
And people are very excited about this idea that through some sort of, well, actually you just explain Jacob, cause you are the technologist. What is watermarking? Is this a good idea? Will this work to help us understand and distinguish between AI-generated things and things that are just made by people?

JACOB HOFFMAN-ANDREWS
Sure. So a very real and closely related risk of generative AI is that it is - it will, and already is - flooding the internet with bullshit. Uh, you know, many of the articles you might read on any given topic, these days the ones that are most findable are often generated by AI.
And so an obvious next step is, well, what if we could recognize the stuff that's written by AI or the images that are generated by AI, because then we could just skip that. You know, I wouldn't read this article cause I know it's written by AI or you can go even a step further, you could say, well, maybe search engines should downrank things that were written by AI or social networks should label it or allow you to opt out of it.
You know, there's a lot of question about, if we could immediately recognize all the AI stuff, what would we do about it? There's a lot of options, but the first question is, can we even recognize it? So right off the bat, you know, when ChatGPT became available to the public, there were people offering ChatGPT detectors. You know, you could look at this content and, you know, you can kind of say, oh, it tends to look like this.
And you can try to write something that detects its output, and the short answer is it doesn't work and it's actually pretty harmful. A number of students have been harmed because their instructors have run their work through a ChatGPT detector, an AI detector that has incorrectly labeled it.
There's not a reliable pattern in the output that you can always see. Well, what if the makers of the AI put that pattern there? And, you know, for a minute, let's switch from text based to image based stuff. Jason, have you ever gone to a stock photo site to download a picture of something?

JASON KELLEY
I sadly have.

JACOB HOFFMAN-ANDREWS
Yeah. So you might recognize the images they have there, they want to make sure you pay for the image before they use it. So there's some text written across it in a kind of ghostly white diagonal. It says, this is from say shutterstock.com. So that's a form of watermark. If you just went and downloaded that image rather than paying for the cleaned up version, there's a watermark on it.
So the concept of watermarking for AI provenance is that it would be invisible. It would be kind of mixed into the pixels at such a subtle level that you as a human can't detect it, but a computer program designed to detect that watermark could. So you could imagine the AI might generate a picture and then, in the top left pixel, increase its shade by the smallest amount, and then the next one, decrease it by the smallest amount, and so on throughout the whole image.
And you can encode a decent amount of data that way, like what system produced it, when, all that information. And actually the EFF has published some interesting research in the past on a similar system in laser printers where little yellow dots are embedded by certain laser printers, by most laser printers that you can get as an anti counterfeiting measure.
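The pixel-nudging scheme Jacob describes can be sketched in a few lines. This is a toy editorial illustration with made-up pixel values, not the watermarking used by any real image generator: it hides each bit of a message in a pixel's least significant bit, a change far too small for a human to see.

```python
# Toy invisible watermark: store message bits in pixels' low bits.

def embed(pixels, message_bits):
    """Set the low bit of the first len(message_bits) pixels."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit   # clear the low bit, then set it
    return out

def extract(pixels, length):
    """Read the low bits back out as the hidden message."""
    return [p & 1 for p in pixels[:length]]

image = [200, 201, 202, 203, 204, 205]   # made-up grayscale pixel values
mark = [1, 0, 1, 1]                      # bits identifying, say, the generator
stamped = embed(image, mark)
print(extract(stamped, 4))                # the hidden bits come back out
print(max(abs(a - b) for a, b in zip(image, stamped)))  # each pixel moved by at most 1
```

Because the signal lives in bits that any resize, recompression, or deliberate rewrite can overwrite, a scheme like this is trivially stripped, which is exactly the weakness discussed next.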

JASON KELLEY
This is one of our most popular discoveries that comes back every few years, if I remember right, because people are just gobsmacked that they can't see them, but they're there, and that they have this information. It's a really good example of how this works.

CINDY COHN
Yeah, and it's used to make sure that they can trace back to the printer that printed anything on the off chance that what you're printing is fake money.

JACOB HOFFMAN-ANDREWS
Indeed, yeah.
The other thing people really worry about is that AI will make it a lot easier to generate disinformation and then spread it and of course if you're generating disinformation it's useful to strip out the watermark. You would maybe prefer that people don't know it's AI. And so you're not limited to resizing or cropping an image. You can actually, you know, run it through a program. You can see what the shades of all the different pixels are. And you, in theory probably know what the watermarking system in use is. And given that degree of flexibility, it seems very, very likely - and I think past technology has proven this out - that it's not going to be hard to strip out the watermark. And in fact, it's not even going to be hard to develop a program to automatically strip out the watermark.
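It's easy to see why stripping is so cheap. Assuming an LSB-style watermark of the pixel-shade kind described earlier, a few lines of Python erase it while changing each pixel by at most one shade (a toy sketch, not an attack on any real deployed system):

```python
# Destroying an LSB-style watermark is trivial: zeroing (or re-randomizing)
# every pixel's lowest bit erases the hidden data while altering each pixel
# value by at most 1 -- visually imperceptible.

def strip_lsb(pixels):
    """Zero out the least-significant bit of every pixel."""
    return [p & ~1 for p in pixels]

# Pixels whose lowest bits spell out a hypothetical mark 1,0,1,1,0,1,0,0:
watermarked = [201, 200, 199, 199, 202, 201, 196, 202]
stripped = strip_lsb(watermarked)

recovered = [p & 1 for p in stripped]
print(recovered)  # → [0, 0, 0, 0, 0, 0, 0, 0] -- the mark is gone
assert all(abs(a - b) <= 1 for a, b in zip(watermarked, stripped))
```

Ordinary operations like JPEG re-compression, resizing, or screenshotting have much the same effect, which is why robust watermarking against a motivated adversary has historically been a losing game.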

CINDY COHN
Yep. And you, you end up in a cat and mouse game where the people who you most want to catch, who are doing sophisticated disinformation, say to try to upset elections, are going to be able to either strip out the watermark or fake it and so you end up where the things that you most want to identify are probably going to trick people. Is that, is that the way you're thinking about it?

JACOB HOFFMAN-ANDREWS
Yeah, that's pretty much what I'm getting at. I wanted to say one more thing on, um, watermarking. I'd like to talk about chainsaw dogs. There's this popular genre of image on Facebook right now of a man and his chainsaw carved wooden dog and, often accompanied by a caption like, look how great my dad is, he carved this beautiful thing.
And these are mostly AI generated and they receive, you know, thousands of likes and clicks and go wildly viral. And you can imagine a weaker form of the disinformation claim of say, ‘Well, okay, maybe state actors will strip out watermarks so they can conduct their disinformation campaigns, but at least adding watermarks to AI images will prevent this proliferation of garbage on the internet.’
People will be able to see, oh, that's a fake. I'm not going to click on it. And I think the problem with that is even people who are just surfing for likes on social media actually love to strip out credits from artists already. You know, cartoonists get their signatures stripped out and in the examples of these chainsaw dogs, you know, there is actually an original.
There's somebody who made a real carving of a dog. It was very skillfully executed. And these are generated using kind of image to image AI, where you take an image and you generate an image that has a lot of the same concepts. A guy, a dog, made of wood and so they're already trying to strip attribution in one way.
And I think likely they would also find a way to strip any watermarking on the images they're generating.

CINDY COHN
So Jacob, we heard earlier about Kit's ideal world. I'd love to hear about the future world that Jacob wants us to live in.

JACOB HOFFMAN-ANDREWS
Yeah. I think the key thing is, you know, that people are safer in their daily lives than they are today. They're not worried about their livelihoods going away. I think this is a recurring theme when most new technology is invented that, you know, if it replaces somebody's job, and that person's job doesn't get easier, they don't get to keep collecting a paycheck. They just lose their job.
So I think in the ideal future, people have a means to live and to be fulfilled in their lives, to do meaningful work still. And also, in general, human agency is expanded rather than restricted. The promise of a lot of technology is that, you know, you can do more in the world, you can achieve the conditions you want in your life.

CINDY COHN
Oh that sounds great. I want to come back to you Kit. We've talked a little about Kitopia, including at the top of the show. Let's talk a little bit more. What else are we missing?

KIT WALSH
So in Kitopia, people are able to use AI if it's a useful part of their artistic expression, they're able to use AI if they need to communicate something visual when I'm hiring a concept artist, when I am getting a corrective surgery, and I want to communicate to the surgeon what I want things to look like.
There are a lot of ways in which words don't communicate as well as images. And not everyone has the skill or the time or interest to go and learn a bunch of photoshop to communicate with their surgeon. I think it would be great if more people were interested and had the leisure and freedom to do visual art.
But in Kitopia, that's something that you have because your basic needs are met. And in part, automation is something that should help us do that more. The ability to automate aspects of, of labor should wind up benefiting everybody. That's the vision of AI in Kitopia.

CINDY COHN
Nice. Well that's a wonderful place to end. We're all gonna pack our bags and move to Kitopia. And hopefully by the time we get there, it’ll be waiting for us.
You know, Jason, that was such a rich conversation. I'm not sure we need to do a little recap like we usually do. Let's just close it out.

JASON KELLEY
Yeah, you know, that sounds good. I'll take it from here. Thanks for joining us for this episode of How to Fix the Internet. If you have feedback or suggestions, we would love to hear from you. You can visit EFF.org slash podcasts to click on listener feedback and let us know what you think of this or any other episode.
You can also get a transcript or information about this episode and the guests. And while you're there of course, you can become an EFF member, pick up some merch, or just see what's happening in digital rights this or any other week. This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.
In this episode, you heard Kalte Ohren by Alex featuring starfrosch & Jerry Spoon; Lost Track by Airtone; Come Inside by Zep Hume; Xena's Kiss/Medea's Kiss by MWIC; Homesick by Siobhan D; and Drops of H2O (The Filtered Water Treatment) by J.Lang. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

 

Podcast Episode: AI on the Artist's Palette

By: Josh Richman
June 4, 2024 at 03:06

Collaging, remixing, sampling—art always has been more than the sum of its parts, a synthesis of elements and ideas that produces something new and thought-provoking. Technology has enabled and advanced this enormously, letting us access and manipulate information and images in ways that would’ve been unimaginable just a few decades ago.


(You can also find this episode on the Internet Archive and on YouTube.)

For Nettrice Gaskins, this is an essential part of the African American experience: The ability to take whatever is at hand—from food to clothes to music to visual art—and combine it with life experience to adapt it into something new and original. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how she takes this approach in applying artificial intelligence to her own artwork, expanding the boundaries of Black artistic thought.  

In this episode you’ll learn about: 

  • Why making art with AI is about much more than just typing a prompt and hitting a button 
  • How hip-hop music and culture was an early example of technology changing the state of Black art 
  • Why the concept of fair use in intellectual property law is crucial to the artistic process 
  • How biases in machine learning training data can affect art 
  • Why new tools can never replace the mind of a live, experienced artist 

Dr. Nettrice R. Gaskins is a digital artist, academic, cultural critic, and advocate of STEAM (science, technology, engineering, arts, and math) fields whose work explores "techno-vernacular creativity" and Afrofuturism. She teaches, writes, "fabs," and makes art using algorithms and machine learning. She has taught multimedia, visual art, and computer science to high school students, and now is assistant director of the Lesley STEAM Learning Lab at Lesley University. She was a 2021 Ford Global Fellow, serves as an advisory board member for the School of Literature, Media, and Communication at Georgia Tech, and is the author of "Techno-Vernacular Creativity and Innovation" (2021). She earned a BFA in Computer Graphics with honors from Pratt Institute in 1992; an MFA in Art and Technology from the School of the Art Institute of Chicago in 1994; and a doctorate in Digital Media from Georgia Tech in 2014.

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

NETTRICE GASKINS
I just think we have a need to remix, to combine, and that's where a lot of our innovation comes from, our ability to take things that we have access to. And rather than see it as a deficit, I see it as an asset because it produces something beautiful a lot of the times. Something that is really done for functional reasons or for practical reasons, or utilitarian reasons is actually something very beautiful, or something that takes it beyond what it was initially intended to be.

CINDY COHN
That's Nettrice Gaskins. She’s a professor, a cultural critic and a digital artist who has been using algorithms and generative AI as a part of her artistic practice for years.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping into the fray when they DO go wrong. But this show is about envisioning, and hopefully helping create, a better future.

JASON KELLEY
Our guest today is Nettrice Gaskins. She’s the assistant director of the Lesley STEAM learning lab at Lesley University and the author of Techno-Vernacular Creativity and Innovation. Her artwork has been featured by the Smithsonian, among many other institutions.

CINDY COHN
Nettrice has spoken about how her work creating art using generative AI prompts is directly related to remix culture and hip hop and collage. There’s a rich tradition of remixing to create new artworks that can be more than the sum of their parts, and – at least the way that Nettrice uses it – generative AI is another tool that can facilitate this kind of art. So we wanted to start the conversation there.

NETTRICE GASKINS
Even before hip hop, even the food we ate, um, poor people didn't have access to, you know, ham or certain things. So they used the intestines of a pig and then they created gumbo, because they had a little bit of this and a little bit of that and they found really creative and innovative ways to put it all together that is now seen as a thing to have, or have tried. So I think, you know, when you have around the world, not just in the United States, but even in places that are underserved or disenfranchised you have this, still, need to create, and to even innovate.

And I think a lot of the history of African Americans, for example, in the United States, they weren't permitted to have their own languages. But they found ways to embed it in language anyway. They found ways to embed it in the music.

So I think along the way, this idea of what we now know as remixing or sampling or collage has been there all along and this is just one other way.  I think that once you explain how generative AI works to people who are familiar with remixing and all this thing in the history, it clicks in many ways.
Because it starts to make sense that it is instead of, you know, 20 different magazines I can cut images out and make a collage with, now we're talking about thousands of different, pieces of information and data that can inform how an image is created and that it's a prediction and that we can create all these different predictions. It sounds a lot like what happens when we were looking at a bunch of ingredients in the house and realizing we had to make something from nothing and we made gumbo.

And that gumbo can take many different forms. There's a gumbo in this particular area of the country, then there's gumbo in this particular community, and they all have the same idea, but the output, the taste, the ingredients are different. And I think that when you place generative AI in that space, you're talking about a continuum. And that's kind of how I treat it when I'm working with gen AI.

CINDY COHN
I think that's so smart. And the piece of that that's important that's kind of inherent in the way you're talking about it, is that the person doing the mixing, right? The chef, right, is the one who who does the choices and who's the chef matters, right?

NETTRICE GASKINS
And also, you know, when they did collage, there's no attribution. So if you look at a Picasso work that's done collage, he didn't, you know, all the papers, newspapers that he took from, there's no list of what magazines those images came from, and you could have hundreds to 50 to four different references, and they created fair use kind of around stuff like that to protect, you know, works that are like, you know, collage or stuff from modern art.

And we're in a situation where those sources are now quadrupled, it's not even that, it's like, you know, how many times, as opposed to when we were just using paper, or photographs.

We can't look at it the same because the technology is not the same, however, some of the same ideas can apply. Anybody can do collage, but what makes collage stand out is the power of the image once it's all done. And in some cases people don't want to care about that, they just want to make collage. They don't care, they're a kid and they just want to make paper and put it together, make a greeting card and give it to mom.

Other people make some serious work, sometimes very detailed, using collage, and that's just paper. We're not even talking about digital collage, or the ways we use Adobe Photoshop to layer images and create digital collages, and now Photoshop's considered to be an AI generator as well. So I think that if we look at the whole continuum of modern art, we look at this need to curate abstractions from things from life.

And, you know, Picasso was looking at African art, there's a way in which they abstracted that he pulled it into cubism, him and many other artists of his time. And then other artists looked at Picasso and then they took it to whatever level they took it to. But I think we don't see the continuum. We often just go by the tool or go by the process and not realize that this is really an extension of what we've done before. Which is how I view gen AI. And the way that I use it is oftentimes not just hitting a button or even just cutting and pasting. It is a real thoughtful process about ideas and iteration and a different type of collage.

CINDY COHN
I do think that this bridges over into, you know, an area where EFF does a lot of work, right, which is really making sure we have a robust Fair Use doctrine that doesn't get stuck in one technology, but really can grow because, you know we definitely had a problem with hip hop where the, kind of, over-copyright enforcement really, I think, put a damper on a lot of stuff that was going on early on.

I don't actually think it serves artists either, that we have to look elsewhere as a way to try to make sure that we're getting artists paid rather than trying to control each piece and make sure that there's a monetization scheme that's based upon the individual pieces. I don't know if you agree, but that's how I think about it.

NETTRICE GASKINS
Yeah, and I, you know, just like we can't look at collage traditionally and then look at gen AI as exactly the same. There's some principles and concepts around that I think they're very similar, but, you know, there's just more data. This is much more involved than just cutting and pasting on canvas board or whatever, that we're doing now.

You know, I grew up with hip hop, hip hop is 50 this year, I'm 53, so I was three, so hip hop is my whole life. You know, from the very beginning to, to now. And I've also had some education or some training in sampling. So I had a friend who was producing demos for, and I would sit there all night and watch him splice up, you know, different sounds. And eventually I learned how to do it myself. So I know the nature of that. I even spliced up sampled musics further to create new compositions with that.

And so I'm very much aware of that process and how it connects even from the visual arts side, which is mostly what I am as a visual artist, of being able to splice up and, and do all that. And I was doing that in 1992.

CINDY COHN
Nice.

NETTRICE GASKINS
I was trying to do it in 1987, when I first used Amiga and DePaint. I was trying to make collages then in addition to what I was doing in my visual arts classes outside of that. So I've always been interested in this idea. But if you look at the history of even the music, these were poor kids living in the Bronx. These were poor kids and they couldn't afford all the things the other kids who were well off had, so they would go to the trash bins and take equipment and re-engineer it and come up with stuff that now DJs around the world are using. That people around the world are doing, but they didn't have, so they had to be innovative. They had to think outside the box. They weren't musicians. They didn't have access to instruments, but what they did have access to was records. And they had access to, you know, discarded electronics, and they were able to figure out a way to stretch out a rhythm so that people could dance to it.

They had the ability to layer sounds so that there was no gap between one album and the next, so they could continue that continuous play so that the party kept going. They found ways to do that. They didn't go to a store and buy anything that made that happen. They made it happen by tinkering and doing all kinds of things with the equipment that they had access to, which is from the garbage.

CINDY COHN
Yeah, absolutely. I mean, Grandmaster Flash and the creation of the crossfader and a lot of actual, kind of, old school hardware development, right, came out of that desire and that recognition that you could take these old records and cut them up, right? Pull the, pull the breaks and, and play them over and over again. And I just think that it's pulling on something very universal. Definitely based upon the fact that a lot of these kids didn't have access to formal instruments and formal training, but also just finding a way to make that music, make that party still go despite that, there's just something beautiful about that.

And I guess I'm, I'm hoping, you know, AI is quite a different context at this point, and certainly it takes a lot of money to build these models. But I'm kind of interested in whether you think we're headed towards a future where these foundational models or the generative AI models are ubiquitous and we'll start to see the kids of the future picking them up and building new things out of them.

NETTRICE GASKINS
I think they could do it now. I think that with the right situation where they could set up a training model and figure out what data they wanted to go into the model and then use that model and build it over time. I just think that it's the time and the space, just like the time and the space that people had to create hip hop, right?

The time and the space to get in a circle and perform together or get into a room and have a function or party. I think that it was the time. And I think that, we just need that moment in this space to be able to produce something else that's more culturally relevant than just something that's corporate.
And I think my experiences as an artist, as someone who grew up around hip-hop all my life, some of the people that I know personally are pioneers in that space of hip-hop. But also, I don't even stay in hip-hop. You know, I was talking about sashiko, man, that's a Japanese hand-stitching technique that I'm applying, remixing to. And for me to do that with Japanese people, you know, and then their first concern was that I didn't know enough about the sashiko to be going there. And then when I showed them what I knew, they were shocked. Like, when I go into, I go deep in. And so they were very like, Oh, okay. No, she knows.

Sashiko is a perfect example. If you don't know about sashiko embroidery and hand stitching: these were poor people, and they wanted to stretch out the fabrics and the clothing for longer. So they figured out ways to create these intricate stitching patterns that reinforced the fabric so that it would last longer. And then they would do patches, like patchwork quilts, and it was both a quilting and embroidery technique for poor people, once again, using what they had.

When we think about gumbo, here's another situation of people who didn't have access to fancy clothing or fancy textiles, but found a way. And then the work that they did was beautiful. Aesthetically, it was utilitarian in terms of why they did it. But now we have this entire cultural art form that comes out of that, that's beautiful.

And I think that's kind of what has happened along the way. You know, we are, just like there are gatekeepers in the art world so the Picassos get in, but not necessarily. You know, I think about Romare Bearden, who did get into some of the museums and things. But most people, they know of Picasso, but they don't know about Romare Bearden who decided to use collage to represent black life.

But I also feel like, we talk about equity, and we talk about who gets in, who has the keys. The same thing occurs in generative AI, or just AI in general. The New York Times had an article recently that listed all the AI pioneers, and no women were involved, it was just men. And then there was a Medium article: here were 13, 15 women you could have had in your list. Once again, we see it, where people are saying who holds the keys. These are the people that hold the keys. And in some cases, it's based on what academic institution you're at.

So again, who holds the keys? Even in the women who are listed: the MITs and the Stanfords. And somewhere out there, there's an AI innovator who isn't in any of those institutions but is doing some cool things within a certain niche, you know, so we don't hear those stories. But there's not even an opening to explore that. The person who wrote that list and just included those men didn't even think about women, didn't even think about the other possibilities of who might be innovating in this space.

And so we continue to have this year in and year out every time there's a new change in our landscape, we still have the same kinds of historical omissions that have been going on for many years.

JASON KELLEY
Could we lift up some of the work that you have, have been doing and talk about like the specific process or processes that you've used? How do you actually use this? 'Cause I think a lot of people probably that listen, just know that you can go to a website and type in a prompt and get an image, and they don't know about, like, training it, how you can do that yourself and how you've done it. So I'm wondering if you could talk a little bit about your specific process.

NETTRICE GASKINS
So, I think, you know, people were saying, especially maybe two years ago, that my color scheme was unusually advanced for just using Gen AI. Well, I took two semesters of mandatory color theory in college.

So I had color theory training long before this stuff popped up. I was a computer graphics major, but I still had to take those classes. And so, yeah, my sense of color theory and color science is going to be strong because I had to do that every day as a freshman. And so that will show up.

I've had to take drawing, I've had to take painting. And a lot of those concepts that I learned as an art student go into my prompts. So that's one part of it. I'm using colors. I know the compliment. I know the split compliments.

I know the interactions between two colors that came from training, from education, of being in the classroom with a teacher or professor, but also, like one of my favorite books is Cane by an author named Jean Toomer. He only wrote one book, but it's a series of short stories. I love it. It's so visual. The way he writes is so visual. So I started reinterpreting certain aspects of some of my favorite stories from that book.

And then I started interpreting some of those words and things and concepts and ideas in a way that I think the AI can understand, the generator can understand.

So another example would be Maya Angelou's Phenomenal Woman. There's this part of the poem that talks about oil wells and how, you know, one of the lines. So when I generated my interpretation of that part of the poem, the oil wells weren't there, so I just extended using, in the same generator, my frame and set oil wells and drew a box: In this area of my image, I want you to generate oil wells.

And then I post it and people have this reaction, right? And then I actually put up the poem and said, this is Midjourney. Its reinterpretation is not just at the level of reinterpreting the image, and it's not like I want to create a Picasso.

I don't, I don't want my work to look like Picasso at all or anybody. I want my work to look like the Cubist movement mixed with the Fauvists mixed with the collages mixed with this, with … I want a new image to pop up. I want to see something brand new and that requires a lot of prompting, a lot of image prompting sometimes, a lot of different techniques.

And it's a trial and error kind of thing until you kind of find your way through. But that's a creative process. That's not hitting a button. That's not cutting and pasting or saying make this look like Picasso. That's something totally different.

JASON KELLEY
Let’s take a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Nettrice Gaskins.

The way Nettrice talks about her artistic process using generative AI makes me think of that old cliche about abstract art – you know, how people say 'my kid could paint that.' There's a misconception now with Gen AI that people assume you just pop in a few words and boom, you get a piece of art. Sometimes that’s true, but Nettrice's approach goes far beyond a simple prompt.

NETTRICE GASKINS
Well, I did a talk recently, and it may have been for the Philadelphia Museum of Art. I did a lecture, and in the Q&A they said, could you just demo what you do? You have some time. And I remember after I demoed, they said, oh, that definitely isn't hitting a button. That is much more. Now I feel like I should go in there.

And a lot of times people come away feeling like, now I really want to get in there and see what I can do. Cause it isn't. I was showing, you know, in what, 30 seconds to a minute, basically how I generate images, which is very different than, you know, what they might think. And that was just within Midjourney. Another reason, personally: before I got into the prompt side, it was image style transfer, it was deep style. It wasn't prompt based. So it was about applying a style to an image. Now you can apply many styles to one image. But then it was like, apply a style to this photo. And I spent most of my time in generative AI doing that until 2021, with DALL-E and Midjourney.

So before that, there were no prompts, it was just images. But then a lot came from that. The Smithsonian show came from that earlier work. It was like right on the edge of DALL-E and all that stuff coming. But I feel like, you know, my approach even then was somehow I didn't see images that reflected me or reflected, um, the type of images I wanted to see.

So that really propelled me into going into generative AI from the image style, applying styles to, for example, there's something if you're in a computer graphics major or you do computer graphics development or CGI, you may know a lot of people would know something called subsurface scattering.
And subsurface scattering is an effect people apply to skin. It's kind of like a milky glow. It's very well known; you texture and model your person based on that. However, it dulls dark skin tones. And if you look at photography and all the years with film and all that stuff, we have all these examples of where things were calibrated a certain way, not quite for darker skin tones. Here we are again. But there's something called specular reflection, or shine, and apparently when applied, it brings up and enhances darker skin tones. So I wondered if, using neural image style transfer or deep style, I could apply that shine, rather than subsurface scattering, to my photographs and create portraits of darker skin tones with enhanced features.

Well that succeeded. It worked. And I was just using 18th century tapestries that had metallics in them. So they have gold or they, you know, they had that shine in it as the style applied.

CINDY COHN
Ah.

NETTRICE GASKINS
So one of those, I did a bunch of series of portraits called the gilded series. And around the time I was working on that and exploring that, um, Greg Tate, the cultural critic and writer, Greg Tate, passed away in 2021 and, um, I did a portrait. I applied my tapestry, the style, and it was a selfie he had taken of himself. So it wasn't like it was from a magazine or anything like that. And then I put it on social media and immediately his family and friends reached out.
So now it's a 25 foot mural in Brooklyn.

CINDY COHN
Wow.

JASON KELLEY
It's beautiful. I was looking at it earlier. We'll link to it.

CINDY COHN
Yeah, I’ve seen it too.

NETTRICE GASKINS
And that was not prompt based, that's just applying some ideas around specular reflection and it says from the Gilded Series on the placard. But that is generative AI. And that is remixing. Some of that is in Photoshop, and I Photoshopped, and some of that is three different outputs from the generator that were put together and combined in Photoshop to make that image.

And when it's nighttime, because it has metallics in there, there's a little bit of a shine to the images. When I see people tag me, if they're driving by in the car, you see that glow. I mean, you see that shine, and it, it does apply. And that came from this experimenting with an idea using generative AI.

CINDY COHN
So, and when people are thinking about AI right now, you know, we've really worked hard and EFF has been part of this, but others as well, is to put the threat of bias and bias kind of as something we also have to talk about because it's definitely been historically a problem with, uh, AI and machine learning systems, including not recognizing black skin.

And I'm wondering as somebody who's playing with this a lot, how do you think about the role bias plays and how to combat it. And I think your stories kind of do some of this too, but I'd love to hear how you think about combating bias. And I have a follow up question too, but I want to start with that.

NETTRICE GASKINS
Yeah, some of the presentations I've done - I did one, The Power of Difference, for Bloomberg - were talking to the black community about generative AI. There was a paper I read a month or two ago: they did a study of all the main popular AI generators, like Stable Diffusion, Midjourney, DALL-E, maybe another, and they did an experiment to show bias, to show why this is important. One of the prompts was "a portrait of a lawyer." They ran it in all of them, and it was all men...

CINDY COHN
I was going to say it didn't look like me either. I bet.

NETTRICE GASKINS
I think DALL-E was more diverse. Still all men, but it was like a black guy, and then there was like a racially ambiguous guy. And, um, for Deep Dream Generator, it was just a black guy with a striped shirt.

But for "portrait of a felon," Midjourney had a somewhat more diverse result - still all men, but more racially ambiguous men. But DALL-E produced three apes and a black man. And so my comment to the audience, or to listeners, is: we know that there's history, in Jim Crow and before that, of linking black men, black people, to apes. Somehow that's sitting in there. The only thing in the prompt was "portrait of a felon," and there are three apes and a black man. How do apes play into "felon?" The connection isn't "felon," the connection is the black man, and then to the apes. That's sitting somewhere and it easily popped up.

And there's been scary stuff that I've seen in Midjourney, for example: I'm trying to do a blues musician and it gives me an ape with a guitar. So there's that, and it's still all men, right?

So then because I have a certain particular knowledge, I do know of a lawyer who was Constance Baker Motley. So I did a portrait of Constance Baker Motley, but you would have to know that. If I'm a student or someone, I don't know any lawyers and I do portrait of a lawyer for an assignment or portrait of whatever, who knows what might pop up and then how do I process that?

We see bias all the time. Because of who I am, and because I know history, I know why the black man and the apes or animals popped up for "felon," but it still happened, and we still have this reality. And so one of the things needed to offset some of that is artist or user intervention.
So we intervene by changing the image. Thumbs up, thumbs down. Or we can, in the prediction, say, this is wrong. This is not the right information. And eventually it trains the model not to do that. Or we can create a Constance Baker Motley, you know, of our own to offset that, but we would have to have that knowledge first.

And a lot of people don't have that knowledge first. I can think of a lawyer off the top of my head, you know, a black woman, who's different from what I got from the AI generators. That intervention right now is key. And then we've got to have more people who are looking at the data, who are looking at the data sources, and who are also training the model, and more ways for people from diverse groups to train the model, or help train the model, so we get better results.

And that usually doesn't happen. These biased results happen easily. So that's kind of my answer to that.

CINDY COHN
One of the stories that I've heard you tell is about working with these dancers in Trinidad and training up a model on Caribbean dancers. And I'm wondering if one of the ways you think about addressing bias - I guess, same with your lawyer story - is sticking other things into the model, or into the training data, to try to give it a broader frame than it might otherwise have.

But I'm, I'm wondering if that's something you do a lot of, and, and I, I might ask you to tell that story about the dancers, because I thought it was cool.

NETTRICE GASKINS
That was a Mozilla Foundation-sponsored project for many different artists and technologists to interrogate AI - generative AI specifically, but AI in general. And we chose that theme because two of my team - it was a team of three women, me and two other women; one's a dancer, one's an architect - those two women are from the Caribbean.

And because during the lockdown there was no festival, there was no carnival, a lot of people across those cultures were doing it on Zoom. So we had Zoom parties with the data we were collecting. We were explaining generative AI - what we were doing and how it worked - to the Caribbean community.

CINDY COHN
Nice.

NETTRICE GASKINS
And then we would put the music on and dance, so we were getting footage from the people who were participating, and then using PoseNet and machine learning to produce an app that allows you to dance with yourself - a mini dancer - or to dance with shapes, or to create a painting with movement, using colors from Carnival.

And one of the members, Vernelle Noel, was using GANs - generative adversarial networks - to produce the kind of costuming you might see at Carnival, but in really futuristic ways. So there were different ways we could do that, and we explored them with the project.
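For readers curious how a "paint with movement" app like the one Nettrice describes might hang together: below is a minimal, hypothetical sketch. It assumes a pose estimator such as PoseNet supplies per-frame joint coordinates; the joint name, palette, and sample data here are invented for illustration, not taken from the actual project.

```python
# Hypothetical sketch: turn one joint's trajectory across video frames into a
# colored trail, the core of a "painting with movement" effect. A real app
# would pull frames from a webcam and keypoints from a pose estimator like
# PoseNet; here we fake that output with plain dicts.

CARNIVAL_PALETTE = ["#FFD700", "#FF4500", "#00CED1", "#FF1493"]  # assumed colors

def trail_from_keypoints(frames, joint="right_wrist", palette=CARNIVAL_PALETTE):
    """Map a joint's (x, y) positions across frames to colored trail points."""
    trail = []
    for i, keypoints in enumerate(frames):
        if joint in keypoints:                 # skip frames where detection failed
            x, y = keypoints[joint]
            color = palette[i % len(palette)]  # cycle through the palette over time
            trail.append((x, y, color))
    return trail

# Stand-in for pose-estimator output: one dict of joint -> (x, y) per frame.
frames = [
    {"right_wrist": (0.2, 0.8)},
    {"right_wrist": (0.3, 0.7)},
    {},                                        # no detection this frame
    {"right_wrist": (0.5, 0.5)},
]
print(trail_from_keypoints(frames))
```

A renderer would then draw each trail point onto a canvas over the live video, so the dancer's movement leaves colored strokes behind it.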

CINDY COHN
One of the things that - again, I'm kind of feeding you stuff back from yourself, because I found it really interesting - is using these tools in a liberatory way, for liberation, as opposed to surveillance and control. And I wondered if you have some thoughts about how best to do that. What are the kinds of things you look for in a project to see whether it's really based in liberation or based in surveillance and monitoring and control? Because that's been a long-time issue, especially for people from majority countries.

NETTRICE GASKINS
You know, we were very careful with the data from the Carnival project. We said after a particular set period of time, we would get rid of the data. We were only using it for this project for a certain period of time, and we have, you know, signed, everyone signed off on that, including the participants.
Kind of like an IRB if you're an academic. And in one case - Vernelle was an academic - it was done through her university, so there was an IRB involved; otherwise, I think it was just an art project. But we wanted to be careful with data. We wanted people to know: we're going to collect this, and then we're going to get rid of it once we do what we need to do.

And I think that's part of it. But also, you know, people have been doing stuff with surveillance technology for a good minute. Artists have been making statements using surveillance technology. People have been making music - there's a lot of rap music and songs about surveillance, about being watched. And in Second Life, I did a wall of eyes that follow you everywhere you go...

CINDY COHN
Oof.

NETTRICE GASKINS
...to curate the feeling of always being watched. And for people who don't know what that's like, it created that feeling in them as avatars. They were like, why am I being watched? And I'm like, this is you, if you're black, at a grocery store, or if you go to Neiman Marcus, you know, a fancy department store. This might be what you feel like. Trying to simulate that in virtual 3D was the goal.

Though it's not so much that I'm trying to simulate; I'm trying to say, here's another experience. There are people who really get behind the idea that you're taking from other people's work, and that that is the danger. And some people are doing that - I don't want to say that that's not the case. There are people out there who don't have a visual vocabulary but want to get in here, and they'll use another person's artwork or their name to play around with tools. They don't have an arts background. And so they are going to do that.

And then there are people like me who want to push the boundaries, and want to see what happens when you mix different tools and do different things. To those people who say that you're taking other people's work, I say: opt out. Do that. I still continue, because there's been such a lack of representation of artists like me in these spaces that even if you opt out, it doesn't change my process at all.

And that says a lot about gatekeepers, equity, you know, representation in galleries and museums and all of that. The debate stays in certain circles for digital artists - like DeviantArt, you know - and it just doesn't get at some of the real gray areas around this stuff.

CINDY COHN
I think there's something here about people learning as well, where, you know, young musicians start off and they want to play like Beethoven, right? But at some point you need to find your own voice. And that, to me, is the thing - obviously there are people who are just cheaters, who are trying to pass themselves off as somebody else, and that matters and that's important.

But there's also just this period of, I think, artistic growth, where you kind of start out trying to emulate somebody who you admire, and then through that process, you kind of figure out your own voice, which isn't going to be just the same.

NETTRICE GASKINS
And, you know, there was some backlash over a cover that I had done for a book. When the publisher came back, they said, where are your sources? It was a 1949 photograph of my mother and her friends. It has no watermark, so we don't know who took the photo. And obviously, from 1949, it's almost in the public domain - it's right on the edge.

CINDY COHN
So close!

NETTRICE GASKINS
But none of those people are alive anymore. My mom passed in 2018. So I used that as a source: a picture of my mom from a photo album. Or, if it's a client, they pay for licensing of particular stock photos. In one case, I used three stock photos because we couldn't find one that represented the character of the book.

So I had to do like a Frankenstein of three to create that character. That's a collage. And then that was uploaded to the generator, after that, to go further.
So yeah, I think that, you know, when we get into the backlash, a lot of people think this is all you're doing. And then when I open up the door and say, look at what I'm doing - oh, that's not what she was doing at all!

That's because people don't have the education and they're hearing about it in certain circles, but they're not realizing that this is another creative process that's new and it's entering our world that people can reject or not.

It's like when people said digital photography was going to take our jobs: "Really, the best photography comes from being in a darkroom, going through the process with the enlarger and the chemicals. That's true photography - not what you do with these digital cameras and all that software." Same kind of idea, but here we are talking about something else. A very, very similar reaction.

CINDY COHN
Yeah, I think people tend to cling to the thing that they're familiar with as the real thing, and are a little slow sometimes to recognize what's going on. And what I really appreciate about your approach is that you're really using this as a tool. It's a complicated process to get a really cool new paintbrush that people can create new things with.

And I want to make sure that we're not throwing out the baby with the bathwater as we're thinking about this. And I also think that, you know, my hope and my dream is that in our better technological future, these tools will be far more evenly distributed than some of the earlier tools, right?
And you know, Second Life and things like that were fairly limited by who had the financial ability to actually have access. We have broadened that aperture a lot - not as far as it needs to go. And so, you know, part of my dream for a better tech future is that these tools are not locked away, with only people who have certain access and certain credentials getting the ability to use them.

But really, we broaden them out. That points towards more open models - open foundational models - as well as a broader range of people being able to play with them, because I think that's where the cool stuff's probably going to come from. That's where the cool stuff has always come from, right?

It hasn't come from the mainstream corporate business model for art. It's come from all the little nooks and crannies where the light comes in.

NETTRICE GASKINS
Yeah. Absolutely.

CINDY COHN
Oh Nettrice, thank you so much for sharing your vision and your enthusiasm with us. This has just been an amazing conversation.

NETTRICE GASKINS
Thanks for having me.

JASON KELLEY
What an incredible conversation to have, in part because, you know, we got to talk to an actual artist about their process and learn that, well, I learned that I know nothing about how to use generative AI and that some people are really, really talented and it comes from that kind of experience, and being able to really build something, and not just write a sentence and see what happens, but have an intention and a, a dedicated process to making art.

And I think it's going to be really helpful for more people to see the kind of art that Nettrice makes and hear some of that description of how she does it.

CINDY COHN
Yeah. I think so too. And I think the thing that just shines clear is that you can have all the tools, but you need the artist. And if you don't have the artist with their knowledge and their eye and their vision, then you're not really creating art with this. You may be creating something, something you could use, but you know, there's just no replacing the artist, even with the fanciest of tools.

JASON KELLEY
I keep coming back to the term that, uh, was applied to me often when I was younger, which was "script kiddie," because I never learned how to program, but I was very good at finding some code and using it. And I think that a lot of people think that's the only thing that generative AI lets you do.

And it's clear that if you have the talent and the, and the resources and the experience, you can do way more. And that's what Nettrice can show people. I hope more people come away from this conversation thinking like, I have to jump onto this now because I'm really excited to do exactly the kinds of things that she's doing.

CINDY COHN
Yeah, you know, she made a piece of generative art every day for a year, right? I mean, first of all, she comes from an art background, but then, you know, you've got to really dive in, and I think that cool things can come out of it.

The other thing I really liked was her recognition that so much of our culture and our society, and the things that we love about our world, comes from people on the margins making do and making art with what they have.

And I love the image of gumbo as a thing that comes out of cultures that don't have access to the finest cuts of meat and seafood and instead build something else, and she paired that with an image of Sashiko stitching in Japan, which came out of people trying to think about how to make their clothes last longer and make them stronger. And this gorgeous art form came out of it.

And we can think of today's tools, whether they're AI or others, as another medium in which we can begin to make things of beauty, or things that are useful, out of, you know, maybe the dribs and drabs of something that was built for a corporate purpose.

JASON KELLEY
That's exactly right. And I also loved the comparison of generative AI tools to hip hop and to other forms of remix art - I think we've discussed this before at EFF many times, and probably a lot of people have made that connection, but it's worth saying again and again, because it is such a clear through line into those kinds of techniques and those kinds of art forms.

CINDY COHN
Yeah. And I think that, you know, from EFF's policy perspective, you know, one of the reasons that we stand up for fair use and think that it's so important is the recognition that arts like collage and like using generative AI, you know, they're not going to thrive if, if our model of how we control or monetize them is based on charging for every single little piece.

That's going to limit - just as it limited hip hop - what kind of art we can get. And that doesn't mean that we just shrug our shoulders and say, forget it, artists, you're never going to be paid again.

JASON KELLEY
I guess we're just never going to have hip hop, or -

CINDY COHN
Or the other side, which is: we need to find a way. There are lots of ways in which we compensate people for creation that aren't tied to individual control of individual artifacts. And I think in this age of AI, but in previous ages as well, the failure to look to those things and to embrace them has real impacts for our culture and society.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.

In this episode, you heard Xena's Kiss / Medea's Kiss by MWIC and Lost Track by Airtone featuring MWIC. You can find links to their music in our episode notes or on our website at EFF.org slash podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Podcast Episode: Chronicling Online Communities

By Josh Richman
21 May 2024 at 03:08

From Napster to YouTube, some of the most important and controversial uses of the internet have been about building community: connecting people all over the world who share similar interests, tastes, views, and concerns. Big corporations try to co-opt and control these communities, and politicians often promote scary narratives about technology’s dangerous influences, but users have pushed back against monopoly and rhetoric to find new ways to connect with each other.


(You can also find this episode on the Internet Archive and on YouTube.)

Alex Winter is a leading documentarian of the evolution of internet communities. He joins EFF’s Cindy Cohn and Jason Kelley to discuss the harms of behavioral advertising, what algorithms can and can’t be blamed for, and promoting the kind of digital literacy that can bring about a better internet—and a better world—for all of us. 

In this episode you’ll learn about: 

  • Debunking the monopolistic myth that communicating and sharing data is theft. 
  • Demystifying artificial intelligence so that it’s no longer a “black box” impervious to improvement. 
  • Decentralizing and democratizing the internet so more, diverse people can push technology, online communities, and our world forward. 
  • Finding a nuanced balance between free speech and harm mitigation in social media. 
  • Breaking corporations’ addiction to advertising revenue derived from promoting disinformation. 

Alex Winter is a director, writer and actor who has worked across film, television and theater. Best known on screen for “Bill & Ted’s Excellent Adventure” (1989) and its sequels as well as “The Lost Boys” (1987), “Destroy All Neighbors” (2024) and other films, he has directed documentaries including “Downloaded” (2013) about the Napster revolution; “Deep Web” (2015) about the online black market Silk Road and the trial of its creator Ross Ulbricht; “Trust Machine” (2018) about the rise of bitcoin and the blockchain; and “The YouTube Effect” (2022). He also has directed critically acclaimed documentaries about musician Frank Zappa and about the Panama Papers, the biggest global corruption scandal in history and the journalists who worked in secret and at great risk to break the story.   

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

ALEX WINTER
I think that people keep trying to separate the Internet from any other social community, or just society, period. And I think that's very dangerous, because I think that it allows them to be complacent and to allow these companies to get more powerful and to have more control. And they're disseminating all of our information - like, that's where all of our news comes from, all of how anyone understands what's going on on the planet.

And I think that's the problem, is I don't think we can afford to separate those things. We have to understand that it's part of society and deal with making a better world, which means we have to make a better internet.

CINDY COHN
That’s Alex Winter. He’s a documentary filmmaker who is also a deep geek.  He’s made a series of films that chronicle the pressing issues in our digital age.  But you may also know him as William S. Preston, Esquire - aka Bill of the Bill and Ted movies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet. 

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. You know, at EFF we spend a lot of time pointing out the way things could go wrong – and then of course  jumping in to fight when they DO go wrong. But this show is about envisioning – and hopefully helping create – a better future.

JASON KELLEY
Our guest today, Alex Winter, is an actor and director and producer who has been working in show business for most of his life. But as Cindy mentioned, in the past decade or so he has become a sort of chronicler of our digital age with his documentary films. In 2013, Downloaded covered the rise and fall, and lasting impact, of Napster. 2015’s Deep Web – 

CINDY COHN
Where I was proud to be a talking head, by the way. 

JASON KELLEY
– is about the dark web and the trial of Ross Ulbricht who created the darknet market the Silk Road. And 2018’s Trust Machine was about blockchain and the evolution of cryptocurrency. And then most recently, The YouTube Effect looks at the history of the video site and its potentially dangerous but also beneficial impact on the world. That’s not to mention his documentaries on The Panama Papers and Frank Zappa. 

CINDY COHN
Like I said in the intro, looking back on the documentaries you've made over the past decade or so, I was struck with the thought that you've really become this chronicler of our digital age - you know, capturing some of the biggest online issues, or even shining a bit of light on some of the corners of the internet that people like me might live in, but others might not see so much. Where does that impulse come from, for you?

ALEX WINTER
I think partly my age. I came up, obviously, before the digital revolution took root, and was doing a lot of work around the early days of CGI and had a lot of friends in that space. I got my first computer probably in ‘82 when I was in college, and got my first Mac in ‘83, got online by ‘84, dial-up era and was very taken with the nascent online communities at that time, the BBS and Usenet era. I was very active in those spaces. And I'm not at all a hacker, I was an artist and I was more invested in the spaces in that way, which a lot of artists were in the eighties and into the nineties, even before the web.

So I was just very taken with the birth of internet-based communities, and the fact that it was such a democratized space - and I mean that, you know, literally. It was such an interesting mix of people from around the world who felt free to speak about whatever topics they were interested in; there were these incredible people talking about politics and art and everything in an extremely robust way.

But also, um, it really seemed clear to me that this was the beginning of something. And so my interest from the doc side has always been charting the internet in terms of community, and what the impact of that community is on different things, political or whatever. And that's why my first doc was about Napster - because, you know, fast forward to 1998, which for many people is ancient history, but for us was the future.

And you're still in a modem dial-up era, and you now have an online community with over a hundred million people on it, in real time, around the world, who could search each other's hard drives and communicate. What made me want to make docs, I think, was that Napster was the beginning of realizing this disparity between the media's, or the public's, perception of what the internet was and what my experience was.

Where Shawn Fanning was kind of being tarred as this pirate and criminal. And while there were obviously ethical considerations with Napster in terms of the distribution of music, that was not my experience. My experience was this incredibly robust community, and that had extreme validity and significance on a sort of human scale.

And that's, I think, what really prompted me to start telling stories in this space. I think if anyone's interested in doing anything, including what you all do there, it's because you feel like someone else isn't saying what you want to be said, right? And so you're like, well, I better say it because no one else is saying it. So I think that was the inspiration for me to spend more time in this space telling stories here.

CINDY COHN
That's great. I mean, I do, and the stuff I hear in this is that, you know, first of all, the internet kind of erased distance so you could talk to people all over the world from this device in your home or in one place. And that people were really building community. 

And I also hear, in terms of Napster, this huge disconnect between the kind of business-model view of music and music fans' view of music. One of the most amazing things for me was realizing that I could find somebody who had a couple of songs that I really liked and then look at everything else they liked. And it challenged this idea that only professional music critics who have a platform can suggest music to you. It literally felt like a dam broke, and it opened up a world of music. It sounds like that was your experience as well.

ALEX WINTER
It was, and I think that really aptly describes the almost addictive fascination that people had with Napster, and the confusion - even retrospectively - in thinking that that fascination came from theft, from this desire to steal in large quantities. I mean, obviously you had kids in college dorm rooms pulling down gigabytes of music, but the pull, the attraction to Napster, was exactly what you just said: I would find friends in Japan and Africa and Eastern Europe who had some weird, like, Coltrane bootleg that I'd never heard, and then I was like, oh, what else do they have? And then, here's what I have - and I have a very eclectic music collection.

Then you start talking about art, then you start talking about politics, because it was a very robust forum, so everyone was talking to each other. So it really was community. And I think that gets lost, because the narrative wants to remain the narrative - in terms of gatekeepers, in terms of how capitalism works - and that power dynamic was so completely threatened by Napster that, you know, the wheels immediately cranked into gear to create a narrative that said: if you use this, you're just a terrible human being.

And of course what it created was the beginning of this kind of online rebellion, where people who before probably didn't think of themselves as technical, or even that interested in technology, were saying, well, I'm not this thing that you're saying I am, and now I'm really going to rebel against you. Now I'm really going to dive into this space. And I think that it actually brought more people into online communities, and into building online communities, because they didn't feel like they were understood or being adequately represented.

And that led all the way to the Arab Spring and Occupy, and so many other things that came up after that.

JASON KELLEY
The communities angle that you're talking about is probably really useful, I think, to our audience. Because I think they probably find themselves - I certainly find myself - in a lot of the kinds of communities that you've covered. Which often makes me think, like, how is this guy inside my head?

How do you think about the sort of communities that you need to, or want to chronicle. I know you mentioned this disconnect between the way the media covers it and the actual community. But like, I'm wondering, what do you see now? Are there communities that you've missed the boat on covering?

Or things that you want to cover at this moment that just aren't getting the attention that you think they should?

ALEX WINTER
I honestly just follow the things that interest me the most. I don't particularly… look, I don't see myself as, you know, in brackets, a "chronicler" of anything. I have a more modest view of myself. So I really just respond to the things that I find interesting, on two tracks - one being things that I'm personally impacted by.

So I'm not really like an outsider viewing - like, what will I cover next, or what topics should I address - it's what's really impacting me personally. I was hugely invested in Napster. I mean, I was going into my office on weekends and powering every single computer up all weekend onto Napster for the better part of a year. I mean, Fanning laughed at me when I met him, but -

CINDY COHN  
Luckily, the statute of limitations may have run on that, that's good.

ALEX WINTER
Yeah, exactly. 

JASON KELLEY  
Yeah, I'm sure you're not alone.

ALEX WINTER
Yeah, but I mean, as I told Don Ienner when I did the movie, I was like, dude, I'd already bought all this music like nine times over - on vinyl, on cassette, on CD. I think I even had Elcasets at one point. So the record industry still owes me money as far as I'm concerned.

CINDY COHN
I agree.

ALEX WINTER
But no, it was really a personal investment. Even, you know, my interest in the blockchain and Bitcoin, which I have mixed feelings about - I really tried to cover that almost more from a political angle. Same with Deep Web, in a way, but I was interested in how the counter-narratives were building online, and how people were trying to create systems and spaces online once the internet became corporatized - which it really did as soon as the web appeared. What did people do in response to the corporatization of these spaces?

And that's why I was covering Lauri Love's case in England, and eventually Barrett Brown's case, and then the Silk Road, which I was mostly interested in for the same reason as Napster: who were these people, what were they talking about, what drew them to this space? Because it was a very clunky, clumsy way to buy drugs, if that was really what you wanted to do, and Bitcoin is a terrible tool for crime, as everyone now, I think, knows, but didn't so well back then.

So what was really compelling people? And a lot of that was, again, the Silk Road was very much like the sort of alt.* and rec.* world of the early Usenet days. A lot of divergent voices and politics and things like that.

So YouTube is different, because Gale Anne Hurd, the producer, had approached me and asked me if I wanted to tackle this with her. And I'd been looking at Google, largely. And that was why I had a personal interest. And I've got three boys, all of whom came up in the YouTube generations. They all moved off of regular TV and onto their laptops at a certain point in their childhood, and just were on YouTube for everything.

So I wanted to look at the corporatization of the internet, at what was the societal impact of the fact that our largest online community, which is YouTube, is owned by arguably the largest corporation on the planet, which is also a monopoly, which is also a black box.

And what does that mean? What are the societal implications of that? So that was the kind of motive there, but it still was looking at it as a community largely.

CINDY COHN
So the conceit of the show is that we're trying to fix the internet, and I want to know: you've done a lot to shine these stories in different directions, but what does it look like if we get it right? What are the things that we will see if we build the kind of online communities that are better than, I think, the ones that are getting the most attention now?

ALEX WINTER
I think that, you know, I've spent the last two years since I made the film and up until very recently on the road, trying to answer that question for myself, really, because I don't believe I have the answer that I need to bestow upon the world. I have a lot of questions, yeah. I do have an opinion. 

But right now, I mean, I generally feel like many people do that we slept – I mean, you all didn't, but many people slept on the last 20 years, right? And so there's a kind of reckoning now because we let these corporations get away with murder, literally and figuratively. And I think that we're in a phase of debunking various myths, and I think that's going to take some time before we can actually even do the work to make the internet better. 

But I think, you know, I have a big problem with that framing. A large thesis that I had in making The YouTube Effect was to kind of debunk the theory of the rabbit hole and the algorithm as being some kind of all-encompassing evil. Because, sort of like we're seeing in AI now with this rhetoric that AI is going to kill everybody, to me those are very agenda-based narratives. They convince the public that this is all beyond them, and they should just go back to their homes, keep buying things and eating food, ignore these thorny areas of which they have no expertise, and leave it to the experts.

And of course, that means the status quo is upheld. The corporations keep doing whatever they want and they have no oversight, which is what they want. Every time Sam Altman says AI is going to kill the world, he's just saying, OpenAI is a black box, please leave us alone and let us make lots of money and go away. And that's all that means. So I think that we have to start looking at the internet and technology as being run by people. There aren't even that many people running it; there's only a handful of people running the whole damn thing for the most part. They have agendas, they have motives, they have political affiliations, they have capitalist orientations.

So I think we need to really start looking at the internet in a much more specific way. I know that you all have been doing this for a long time; most people do not. So I think more of that, more calling people on the carpet, more specificity.

The other thing that we're seeing, and again, I'm preaching to the choir here with EFF, but like any time the public or the government or the media wakes up to something that they're behind, their inclination of how to fix it is way wrong, right?

And so that's the other place that we're at right now, like with KOSA and the DSA and the Section 230 reform discussions, and they're bananas. And you feel like you're screaming into a chasm, right? Because if you say these things, people treat you like you're some kind of lunatic. Like, what do you mean you don't want to turn off Section 230? That would solve everything! I'm like, it wouldn't, it would just break the internet! So I feel a little, you know, like a Cassandra, but you do feel like you're yowling into a void.

And so I do think that it's going to take a minute to fix the internet. But I think we'll get there. I think the new generations are smarter; the stakes are higher for them. You know, kids in school… well, I don't think the internet or social media is necessarily bad for kids, like, full stop. There's a lot of propaganda there. But I think that, you know, they don't want harms. They want a safer environment for themselves. They don't want to stop using these platforms. They just want them to work better.

But what's happened in the last couple of years, I think, is a good thing: people are breaking off and forming their own communities again. Even kids, even my teenagers, started doing it during COVID. On Discord, they would create their own servers, and no one could get on them but them. There was no danger of, like, being infiltrated by crazy people. All their friends were there. They could bring other friends in, they could talk about whatever issues they wanted to talk about. So there's a kind of return to a fractured, or fragmented, smaller set of communities.

And I think if the internet continues to go that way, that's a good thing, right? That you don't have to be on TikTok or YouTube or whatever to find your people. And I think for grownups, the silver lining of what happened with Twitter, with, you know, Elon Musk buying it and immediately turning it into a Nazi crash pad, is that the average adult realized they didn't have to be there either, right? That they don't have to just use one place, that the internet is filled with little communities that they could go to to talk to their friends.

So I think we're back in this kind of Wild West, like we almost were pre-web and at the beginning of the web, and I think that's good. But I do think there's an enormous amount of misinformation, and some very bad policy all over the world that is going to cause a lot of harm.

CINDY COHN
I mean, that's kind of my challenge to you is once we've realized that things are broken, how do we evaluate all the people who are coming in and claiming that they have the fix? And you know, in The YouTube Effect, you talked to Carrie Goldberg. She has a lot of passion.

I think she's wrong about the answer. She's, I think, done a very good job illuminating some of the problems, especially for specific communities, people facing domestic violence and doxing and things like that. But she's rushed to a really dangerous answer for the internet overall. 

So I guess my challenge is, how do we help people think critically about not just the problems, but the potential issues with solutions? You know, the TikTok bans are something that's going on across the country now, and it feels like the Napster days, right?

ALEX WINTER
Yeah, totally.

CINDY COHN
People have focused on a particular issue and used it to try to say, Oh, we're just going to ban this. And all the people who use this technology for all the things that are not even remotely related to the problem are going to be impacted by this “ban-first” strategy.

ALEX WINTER
Yeah. I mean, it's media literacy. It's digital literacy. One of the most despairing things for me is how much prejudice there is against making docs in this space, how a huge swath of people consider the internet. Because obviously the far right has their agenda, which is just to silence everybody they don't agree with, right? I mean, the left can do the same thing, but the right is very good at it.

Where the left, or center to left, makes mistakes is that they're ignorant about how these technologies work, and so their solutions are wrong. We see that over and over. They have really good intentions, but the solutions are wrong, and they don't actually make sense for how these technologies work. We're seeing that in AI. That was an area where I was trying to do as much work as I could during the Hollywood strike, to educate people about AI, because they were so completely misinformed, and their fixes were not fixes. They were not effective and they would not be legally binding. And it was despairing only because it's kind of frowned upon to say anything about technology other than, don't use it.

CINDY COHN
Yeah.

ALEX WINTER
Right? Like, even other documentaries, the thesis is like, well, just, you know, tell your kids they can't be on it, tell them to read more literature.

Right? And it just drives me crazy because I'm like, I'm a progressive lefty and my kids are all online and guess what? They still read books and like, play music and go outside. So it's this kind of very binary black or white attitude towards technology that like, ‘Oh, it's just bad. Why can't we go back to the days?’

CINDY COHN
And I think there's a false sense that if we just could turn back the clock pre internet, everything was perfect. Right? My friend Cory Doctorow talks about this, like how we need to build the great new world, not the good old world. And I think that's true even for, you know, Internet oldies like you and me who are thinking about maybe the 80s and 90s.

Like, I think we need to embrace where we are now and then build the better world forward. Now, I agree with you strongly about decentralization in smaller communities. As somebody who cares about free speech and privacy, I don't see a way to solve the free speech and privacy problems of the giant platforms.

We're not going to get better dictators. We need to get rid of the dictators and make a lot more smaller, not necessarily smaller, but different spaces, differently governed spaces. But I agree with you that there is this rush to kind of turn back the clock and I think we should try to turn it forward. And again, I kind of want to push you a little bit. What does the turning it forward world look like?

ALEX WINTER
I mean, I have really strong opinions about that. I mean, thankfully, my kids are very tech savvy, like any kid. And I pay attention to what they're doing, and I find it fascinating. And the thing about thinking backwards is that it's a losing proposition. Because the world will leave you behind.

Because the world's not going to go backwards. And the world is only going to go forward. And so you either have a say in what that looks like, or you don't. 

I think two things have to happen. One is media literacy and a sort of weakening of this narrative that it's all bad, so that more people, intelligent people, are getting involved in the future. I think that will help adults get immersed into new technologies and new communities and what's going on. I think at the same time that we have to be working harder to attack the tech monopolies. 

I think being involved, as opposed to being abstinent, is really, really important. And I think more of that will happen with new generations, because then your eyes and your ears are open, and you'll find new communities and the like. But at the same time, we have to work much harder, because this idea that we're allowing big tech to police themselves is just ludicrous, and that's still the world that we're in, and it just drives me crazy. You know, they have one agenda, which is profit and power, and they don't care about anything else.

And I think that's the danger of AI. I mean, it's not that we're all gonna die by robots. It's that this sort of capitalist machine is just gonna roll along unchecked. That's the problem, and it will eat labor, and it will eat other companies, and that's the problem.

CINDY COHN  
I mean, I think that's one of the tricky parts about, you know, the Sam Altman shift, right, from don't regulate us to please regulate us. Behind that "please regulate us" is, you know, "and we'll tell you what the regulations look like, because we're the only ones, these giant gurus, who understand enough about it to figure out how to regulate us."

And I just think it's important to recognize that it's a pivot, but I think you could get tricked into thinking that's actually better. And I don't actually think it is.

ALEX WINTER
It's 100 percent agenda-based. I mean, it's not only not better, it's completely self-serving. And I think that as long as we are following these people as opposed to leading them, we're going to have a problem.

CINDY COHN
Absolutely.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Alex Winter about YouTube.

ALEX WINTER
There's a lot of information there that's of extreme value: medical, artistic, historical, political. In the film, we go to great lengths to show how Caleb Cain, who got kind of pulled in and radicalized by the proliferation of far-right, even neo-Nazi and white nationalist, white supremacist content (which still proliferates on YouTube, because it really is not algorithm-oriented, it's business- and incentive-based), was himself unindoctrinated by ContraPoints, by Natalie Wynn's channel.

And you have to understand that, you know, more teenagers watch YouTube than Netflix. Like, it is everything. It is, by an order of magnitude, so much more of how they spend their time consuming media than anything else. And they're watching their friends talk, they're watching political speakers talk. You know, my son, who's young, his various interests from photography to weightlifting to whatever, all of that's coming from YouTube. All of it.

And they're pretty good at discerning the crap. Although now a lot of the studies show you have to be generally predisposed to this kind of content to really go down the sort of darker areas, and those younger people can be.

You know, I often say that the greatest solution for people who end up getting radicalized on YouTube is more YouTube. Right? It's to find the people on YouTube who are doing good. And I think that's one of the big misunderstandings about disinfo: you can consume good sources. You just have to find them. And people are actually better at discerning truth from lies if that's really what they want to do, as opposed to, like, I just want to get awash in QAnon or whatever.

I think YouTube started not necessarily with pure intentions, but I think that they did start with some good intentions in terms of intentionally democratizing the landscape and voices, and allowing in people in marginalized groups and under autocratic governments. They allowed and they promoted that content, and they created the age of the democratized influencer.

That was intentional. And I would argue that they did a better job of that than my industry did; I think my industry followed their lead. I think the diversity initiatives in Hollywood came after, because Hollywood, like everyone else, is driven by money only, and they were like, oh my God, there are these giant trans and African and Chinese influencers that have huge audiences, we should start allowing more people to have a voice in our business too, because we'll make money off of them. But I think that now YouTube has grown so big and so far beyond that, and it's making them so much money, and they're so incentivized to promote disinformation, propaganda, sort of violent content, because it just makes so much money for them on the ad side, that it's sort of a runaway train at this point.

CINDY COHN
One of the things that EFF has taken a stand on is banning behavioral advertising. And I think one of the things you did in The YouTube Effect is take a hard look at how big a role the algorithm is actually playing. And I think the movie points out that it's not as big a role as people who want an easy answer to the problem are saying.

We've been thinking about this from the privacy perspective, and we decided that behavioral advertising was behind so many of the problems we had, and I wondered how you think about that. Because that is the kind of tracking and targeting that feeds some of those algorithms, but it does a lot more.

ALEX WINTER
Yeah, I think that there's absolutely no doubt, for all the hue and cry that they can't moderate their content, that they could. And I think that we're beginning to see, and this is an area that EFF specifically specializes in, that the question of free speech, and what constitutes free speech as opposed to what they could actually be doing to mitigate harms, is very nuanced.

And it serves them to say that it is not. That it's not nuanced and it's either, either they're going to be shackling free speech or they should be left alone to do whatever they want, which is make money off of advertising, a lot of which is harmful. So I think getting into the weeds on that is extremely important.

You know, a recent example was just how they stopped deplatforming all the Stop the Steal content, which they were doing very successfully. The just flat-out, you know, 2020 election propaganda. And that gets people hurt. I mean, it can get people killed, and it's really not hard to do. But they make more money if they allow this kind of rampant, aggressive, propagandized advertising, as well as content, on their platform.

I just think that we have to be looking at advertising and how it functions in a very granular way. Because the whole thesis of The YouTube Effect, such as we had one, is that this is not about an algorithm, it's about a business model.

These are business incentives. It's no different, and I've been saying this everywhere, it's exactly the same as the Hearst and Pulitzer wars of the late 1800s. It's the same. It's just, we want to make money. We know what attracts eyeballs. We want to advertise and make money from ad revenue from pumping out this garbage, because people eat it up. It's really similar to that. That doesn't require an algorithm.

CINDY COHN
My dream is Alex Winter makes a movie that helps us evaluate all the things that people who are worried about the internet are jumping in to say we ought to do, and helps give people that kind of evaluative power. Because we do see, over and over again, this rush to go to censorship, which, you know, is problematic for free expression but also just won't work, and this kind of gliding over the idea that privacy has anything to do with online harms and that standing up for privacy will do anything.

I just feel like sometimes this literacy piece needs to be both about the problems and about critically thinking about the things that are being put forward as solutions.

ALEX WINTER
Yeah, I mean, I've been writing a lot about that for the last two years. I've written, I think, I don't know, countless op-eds. And there are way smarter people than me, like you all and Cory Doctorow, writing about this like crazy. And I think all of that is having an impact. I think the building blocks of proper internet literacy are being set.

CINDY COHN
Well, I appreciate that you've got three kids who are, you know, healthy and happy using the internet, because I think those stories get overlooked as well. Not that there aren't real harms. It's just that there's this baby-with-the-bathwater kind of approach that we find in policymaking.

ALEX WINTER
Yeah, completely. So I think that people feel like their arms are being twisted, that they have to say these hyper-negative things or fall in line with these narratives. You know, a movie requires characters, right? And I would need a court case or something to follow to find the way in, and I've always got my eyes out for that. But I do think we're at a kind of critical point.

It's really funny, because when I made this film… I'm friends with a lot of different film critics. I've just been around a long time, and I like, you know, reading good film criticism. And one of them, who I respect greatly, was like, I don't want to review your movie, because I really didn't like it and I don't want to give you a really bad review.

And I said, well, why didn't you like it? He's like, because I just didn't like your perspective. And I was like, well, what didn't you like about it? He's like, well, you just weren't hard enough on YouTube. You didn't just come right out and say they're just terrible and no one should be using it.

And I was like, you're the problem. And there's so much of that, that I feel like there's a bias that is going to take time to overcome. No matter what anyone says or whatever film anyone makes, we just have to kind of keep chipping away at it.

JASON KELLEY
Well, it's a shame we didn't get a chance to talk to him about Frank Zappa. But what we did talk to him about was probably more interesting to our audience. The thing that stood out to me was the way he sees these technologies and sort of focuses his documentaries on the communities that they facilitate.

And that was just sort of a, I think, useful way to think about, you know, everything from the deep web to blockchain to YouTube to Napster. He sees these as building communities, and those communities are not necessarily good or bad, but they have some really positive elements. And that led him to this really interesting idea of a future of smaller communities, which I think we all agree with.

Does that sound sort of like what you pulled away from the conversation, Cindy?

CINDY COHN
I think that's right. And I also think he was really smart at noticing the difference between what it was like to be inside some of those communities and how they got portrayed in broader society. He pointed out that when corporate interests, who were the copyright interests, saw what was happening on Napster, they very quickly put together a narrative that everybody was pirates, and that was very different from how it felt to be inside that community and having access to all of that information. That disconnect happens when the people who control our broader societal conversation are often corporate interests with their own commercial interests at heart.

And what it's like to be inside the communities is what connected the Silk Road story with the Napster story. And in some ways YouTube is interesting because it's actually gigantic. It's not a little corner of the internet. But yet, I think he's trying to lift up both the issues that we see in YouTube that are problematic, and also all the other things inside YouTube that are not problematic and, as he pointed out in the story about Caleb Cain, can be part of the solution to pulling people out of the harms.

So I really appreciate this focus. I think it really hearkens back to, you know, one of the coolest things about the internet when it first came along: this idea that we could build communities free of distance and outside of the corporate spaces.

JASON KELLEY
Yeah. And the point you're making about his recognition of who gets to decide what's to blame, I think, leads us right to the conversation around YouTube, which is: it's easy to blame the algorithm when what's actually driving a lot of the problems we see with the site are corporate interests, and engagement with the kind of content that gets people riled up and also makes a lot of money.

And I just love that he's able to sort of parse out these nuances in a way that surprisingly few people do, you know, across media and journalism, and certainly, unfortunately, in government.

CINDY COHN
Yeah, and I think that, you know, it's fun to have a conversation with somebody who gets it at this level about the problems, and who name-checked issues that EFF has been working on for a long time, whether that's KOSA or Section 230 or algorithmic issues, and about how wrongheaded the proposed solutions are.

I appreciate that it kind of drives him crazy in the way it drives me crazy that once you've articulated the harms, people seem to rush toward solutions, or at least are pushed toward solutions, that are not getting us out of this corporate control but rather, in some ways, putting us deeper into it.

And he's already seeing that in the AI push for regulation. I think he's exactly right about that. I don't know if I convinced him to make his next movie about all of these solutions and how to evaluate them. I'll have to keep trying. That may not be where he gets his inspiration, though.

JASON KELLEY
We'll see. I mean, at least, if nothing else, EFF is in many of the documentaries that he has made, and my guess is that we will continue to be a voice of reason in the ones he makes in the future.

CINDY COHN
I really appreciate that Alex has taken his skills and talents and platforms to really lift up the kind of ordinary people who are finding community online and help us find ways to keep that part, and even lift it up as we move into the future.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. 

In this episode you heard Perspectives by J.Lang featuring Sackjo22 and Admiral Bob 

You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.
