*This interview has been edited for length and clarity.*
Rebecca MacKinnon is Vice President, Global Advocacy at the Wikimedia Foundation, the non-profit that hosts Wikipedia. Author of Consent of the Networked: The Worldwide Struggle For Internet Freedom (2012), she is co-founder of the citizen media network Global Voices, and founding director of Ranking Digital Rights, a research and advocacy program at New America. From 1998-2004 she was CNN’s Bureau Chief in Beijing and Tokyo. She has taught at the University of Hong Kong and the University of Pennsylvania, and held fellowships at Harvard, Princeton, and the University of California. She holds an AB magna cum laude in Government from Harvard and was a Fulbright scholar in Taiwan.
David Greene: Can you introduce yourself and give us a bit of your background?
My name is Rebecca MacKinnon. I am presently the Vice President for Global Advocacy at the Wikimedia Foundation, but I’ve worn quite a number of hats working in the digital rights space for almost twenty years. I was co-founder of Global Voices, which at the time we called the International Bloggers’ Network, and which is about to hit its twentieth anniversary. I was one of the founding board members of the Global Network Initiative, GNI. I wrote a book called “Consent of the Networked: The Worldwide Struggle for Internet Freedom,” which came out more than a decade ago. It didn’t sell very well, but apparently it still gets assigned in classes, so I still hear about it. I was also a founder of Ranking Digital Rights, which ranks the big tech companies and the biggest telecommunications companies on the extent to which they are or are not protecting their users’ freedom of expression and privacy. I left that in 2021 and ended up at the Wikimedia Foundation, and it’s never a dull moment!
Greene: And you were a journalist before all of this, right?
Yes, I worked for CNN for twelve years: in Beijing for nine years, where I ended up Bureau Chief and Correspondent, and in Tokyo for almost three years, where I was also Bureau Chief and Correspondent. That’s where I first experienced the magic of the global internet in a journalistic context, and also where I watched the internet arrive in China and the government immediately trying to figure out both how to take advantage of it economically and how to control it enough that the Communist Party would not lose power.
Greene: At what point did it become apparent that the internet would bring both benefits and threats to freedom of expression?
At the beginning I think the media, industry, policymakers, kind of everybody, assumed—you know, this is like in 1995 when the internet first showed up commercially in China—everybody assumed “there’s no way the Chinese Communist Party can survive this,” and we were all a bit naive. And our reporting ended up influencing naive policies in that regard. And perhaps naive understanding of things like Facebook revolutions and things like that in the activism world. It really began to be apparent just how authoritarianism was adapting to the internet and starting to adapt the internet. And how China was really Exhibit A for how that was playing out and could play out globally. That became really apparent in the mid-to-late 2000s as I was studying Chinese blogging communities and how the government was controlling private companies, private platforms, to carry out censorship and surveillance work.
Greene: And it didn’t stop with China, did it?
It sure didn’t! And in the book I wrote I only had a chapter on China and talked about how if the trajectory the Western democratic world was on just kind of continued in a straight line we were going to go more in China’s direction unless policymakers, the private sector, and everyone else took responsibility for making sure that the internet would actually support human rights.
Greene: It’s easy to talk about authoritarian threats, but we see some of the same concerns in democratic countries as well.
We’re all just one bad election away from tyranny, aren’t we? This is again why when we’re talking to lawmakers, not only do we ask them to apply a Wikipedia test—if this law is going to break Wikipedia, then it’s a bad law—but also, how will this stand up to a bad election? If you think a law is going to be good for protecting children or fighting disinformation under the current dominant political paradigm, what happens if someone who has no respect for the rule of law, no respect for democratic institutions or processes ends up in power? And what will they do with that law?
Greene: This happens so much within disinformation, for example, and I always think of it in terms of, what power are we giving the state? Is it a good thing that the state has this power? Well, let’s switch things up and go to the basics. What does free speech mean to you?
People talk about: is it free as in speech? Is it free as in beer? What does “free” mean? I am very much in the camp that freedom of expression needs to be considered in the context of human rights. So my free speech does not give me freedom to advocate for a pogrom against the neighboring neighborhood. That is violating the rights of other people. And I actually think that Article 19 of the Universal Declaration of Human Rights, while it may not be perfect, gives us a really good framework for thinking about the context of freedom of expression or free speech as situated with other rights. And how do we make sure that, if there are going to be limits on freedom of expression to prevent me from calling for a pogrom against my neighbors, the limitations placed on my speech are necessary and proportionate and cannot be abused? It’s therefore very important that whoever is imposing those limits is held accountable, that their actions are sufficiently transparent, and that when any entity limits my speech, whether it’s a government or an internet service provider, I understand who has the power to limit my speech, or limit what I can know or access, so that I can even know what I don’t know! So that I know what is being kept from me. So that I know who has the authority to restrict my speech, and under what circumstances, and what I can do to hold them accountable. That is the essence of freedom of speech within human rights: power held appropriately accountable.
Greene: How do you think about the ways that your speech might harm people?
You can think of it in terms of the other rights in the Universal Declaration. There’s the right to privacy. There’s the right to assembly. There’s the right to life! So for me to advocate for people in that building over there to go kill people in that other building, that’s violating a number of rights that I should not be able to violate. But what’s complicated, when we’re talking about rules and rights and laws and enforcement of laws and governance online, is that we somehow think it can be more straightforward and black and white than governance in the physical world is. So what do we consider to be appropriate law enforcement in the city of San Francisco? It’s a hot topic! And reasonable people of a whole variety of backgrounds reasonably disagree and will never agree! So you can’t just fix crime in San Francisco the way you fix the television. And nobody in their right mind would expect that you could, right? But somehow in the internet space there’s so much policy conversation around making the internet safe for children. Nobody’s running around saying, “let’s make San Francisco safe for children” in the same way. Because they know that if you want San Francisco to be 100% safe for children, you’re going to be Pyongyang, North Korea!
Greene: Do you think that’s because with technology some people just feel like there’s this techno-solutionism?
Yeah, there’s this magical thinking. I have family members who think that because I can fix something in their tech settings I can perform magic. I think it’s because the internet is new, because it’s a little bit mystifying for many people, and because we’re still in the very early stages of people thinking about governance of digital spaces and digital activities as an extension of real-world activities. They’re thinking more along the lines of, okay, it’s like a car we need to put seatbelts on.
Greene: I’ve heard that from regulators many times. Does the fact that the internet is speech, does that make it different from cars?
Yeah, although increasingly cars are becoming more like the internet! Because a car is essentially a smartphone that can also be a very lethal weapon. And it’s also a surveillance device, it’s also increasingly a device that is a conduit for speech. So actually it’s going the other way!
Greene: I want to talk about misinformation a bit. You’re at Wikimedia, and so, independent of any concern people have about misinformation, Wikipedia is the product and its goal is to be accurate. What do we do with the “problem” of misinformation?
Well, I think it’s important to be clear about what is misinformation and what is disinformation, and to deal with them—I mean they overlap, the dividing line can be blurry—but, nonetheless, it’s important to think about both in somewhat different ways. Misinformation is inaccurate information that is not necessarily being spread maliciously with intent to mislead. It might just be, you know, your aunt seeing something on Facebook and being like, “Wow, that’s crazy. I’m going to share it with 25 friends,” and not realizing that they’re misinformed. Whereas disinformation is when someone is spreading lies for a purpose. Whether it’s in an information warfare context, where one party in a conflict is trying to convince a population of something false about their own government, or whatever it is. Or disinformation about a human rights activist and, say, an affair they allegedly had and why they deserve whatever fate they had… you know, just for example. That’s disinformation. And at the Wikimedia Foundation—just to get a little into the weeds, because I think it helps us think about these problems—Wikipedia is a platform whose content is not written by staff of the Wikimedia Foundation. It’s all contributed by volunteers; anybody can be a volunteer. They can go on Wikipedia and contribute to a page or create a page. Whether that content stays, of course, depends on whether the content they’ve added adheres to what constitutes well-sourced, encyclopedic content. There’s a whole hierarchy of people whose job it is to remove content that does not fit the criteria. And one could talk about that for several podcasts. But that process right there is, of course, working to counter misinformation. Because anything that’s not well-sourced—and they have rules about what is a reliable source and what isn’t—will be taken down.
So the volunteer Wikipedians, kind of through their daily process of editing and enforcing rules, are working to eliminate as much misinformation as possible. Of course, it’s not perfect.
Greene: [laughing] What do you mean it’s not perfect? It must be perfect!
What is true is a matter of dispute even between scientific journals or credible news sources, or what have you. So there are lots of debates—all public, in the history tab of every page—about what source is credible and what the facts are, etc. So this is kind of the self-cleaning oven that’s dealing with misinformation. The human hive mind that’s dealing with this. Disinformation is harder, because you have a well-funded state actor who may be encouraging people—not necessarily people who are employed by that actor, but people who are nationalistic supporters of that government or politician, or people who are just useful idiots—to go on and edit Wikipedia to promote certain narratives. But that’s kind of the least of it. You also, of course, have credible physical threats against editors who are trying to delete the disinformation, and against staff of the Foundation who support editors in investigating and identifying what is actually a disinformation campaign and who support volunteers in addressing it, sometimes with legal support, sometimes with technical and other support. But people are in jail in one country in particular right now because they were fighting disinformation on the projects in their language. In Belarus, we had volunteers who were jailed for the same reason. We have people who are under threat in Russia, and you have governments who will say, “Wikipedia contains disinformation about our, for example, Special Military Operation in Ukraine, because they’re calling it ‘an invasion,’ which is disinformation, so therefore they’re breaking the law against disinformation and we have to threaten them.” So the disinformation piece—fighting it can become very dangerous.
Greene: What I hear is there are threats to freedom of expression in efforts to fight disinformation and, certainly in terms of state actors, those might be malicious. Are there any well-meaning efforts to fight disinformation that also bring serious threats to freedom of expression?
Yeah, the people who say, “Okay, we should just require the platforms to remove all content that is anything from COVID disinformation to certain images that might falsely present… you know, deepfake images, etc.” Content-focused efforts to fight misinformation and disinformation will result in over-censorship because you can almost never get all the nuance and context right. Humor, satire, critique, scientific reporting on a topic or about disinformation itself or about how so-and-so perpetrated disinformation on X, Y, Z… you have to actually talk about it. But if the platform is required to censor the disinformation you can’t even use that platform to call out disinformation, right? So content-based efforts to fight disinformation go badly and get weaponized.
Greene: And, as the US Supreme Court has said, there’s actually some social value to the little white lie.
There can be. There can be. And, again, there’s so many topics on which reasonable people disagree about what the truth is. And if you start saying that certain types of misinformation or disinformation are illegal, you can quickly have a situation where the government is becoming arbiter of the truth in ways that can be very dangerous. Which brings us back to… we’re one bad election away from tyranny.
Greene: In your past at Ranking Digital Rights you looked more at the big corporate actors rather than state actors. How do you see them in terms of freedom of expression—they have their own freedom of expression rights, but there are also their users—what does that interplay look like to you?
Especially in relation to the disinformation thing, when I was at Ranking Digital Rights we put out a report that also related to regulation. When we’re trying to hold these companies accountable, whether we’re civil society or government, what’s the appropriate approach? The title of the report was, “It’s Not the Content, it’s the Business Model.” Because the issue is not about the fact that, oh, something bad appears on Facebook. It’s how it’s being targeted, how it’s being amplified, how that speech and the engagement around it is being monetized, that’s where most of the harm takes place. And here’s where privacy law would be rather helpful! But no, instead we go after Section 230. We could do a whole other podcast on that, but… I digress.
I think this is where bringing in international human rights law around freedom of expression is really helpful. Because US constitutional law—the First Amendment—doesn’t really apply to companies; it just protects companies from government regulation of their speech. Whereas international human rights law does apply to companies. There’s this framework, the UN Guiding Principles on Business and Human Rights, under which nation-states have the ultimate responsibility—duty—to protect human rights, but companies and platforms, whether nonprofit or for-profit, have a responsibility to respect human rights. And everybody has a responsibility to provide remedy, redress. So in that context, of course, it doesn’t contradict the First Amendment at all, but it adds another layer of corporate accountability that can be used in a number of ways, and that is being used more actively in the European context. But Article 19 is not just about your freedom of speech; it’s also your freedom of access to information, which is part of it, and your freedom to form an opinion without interference. Which means that if you are being manipulated and you don’t even know it—because you are on a platform that’s monetizing people’s ability to manipulate you—that’s a violation of your freedom of expression under international law. And that’s something any company or platform—including Wikimedia, if we were to allow that to happen, which we don’t—should be held accountable for.
Greene: Just in terms of the role of the State in this interplay, because you could say that companies should operate within a human rights framing, but then we see different approaches around the world. Is it okay or is it too much power for the state to require them to do that?
Here’s the problem. If states were perfect in fulfilling their human rights duties, then we wouldn’t have a problem and we could totally trust states to regulate companies in our interest and in ways that protect our human rights. But there is no such state. There are some that are further along the spectrum than others, but they’re all on a spectrum, nobody is at that position of utopia, and they will never get there. And so, given that all states, in large ways or small, are making demands of internet platforms—of companies generally—that reasonable numbers of people believe violate their rights, we need accountability. Holding the state accountable for what it’s demanding of the private sector, making sure that’s transparent, and making sure the state does not have absolute power is of utmost importance. And you have situations where a government is just blatantly violating rights, and a company—even a well-meaning company that wants to do the right thing—is stuck between a rock and a hard place. You can be really transparent about the fact that you’re complying with bad law, but you’re stuck in this place where if you refuse to comply then your employees go to jail. Or other bad things happen. And so what do you do other than just try to let people know? And then the state tells you, “Oh, you can’t tell people, because that’s a state secret.” So what do you do then? Do you just stop operating? So one can be somewhat sympathetic. Some of the corporate accountability rhetoric has gone a little overboard in not recognizing that if states are failing to do their job, we have a problem.
Greene: What’s the role of either the State or the companies if you have two people and one person is making it hard for the other to speak? Whether through heckling or just creating an environment where the other person doesn’t feel safe speaking? Is there a role for either the State or the companies where you have two peoples’ speech rights butting up against each other?
We have this in private physical spaces all the time. If you’re at a comedy show and somebody gets up and starts threatening the stand-up comedian, obviously, security throws them out! I think in physical space we have some general ideas about that, that work okay. And that we can apply in virtual space, although it’s very contextual and, again, somebody has to make a decision—whose speech is more important than whose safety? Choices are going to be made. They’re not always going to be, in hindsight, the right choices, because sometimes you have to act really quickly and you don’t know if somebody’s life is in danger or not. Or how dangerous is this person speaking? But you have to err on the side of protecting life and limb. And then you might have realized at the end of the day that wasn’t the right choice. But are you being transparent about what your processes are—what you’re going to do under what circumstances? So people know, okay, well this is really predictable. They said they were going to x if I did y, and I did y and they did indeed take action, and if I think that they unfairly took action then there’s some way of appealing. That it’s not just completely opaque and unaccountable.
This is a very overly simplistic description of very complex problems, but I’m now working at a platform. Yes, it’s a nonprofit, public interest platform, but our Trust and Safety team are working with volunteers who are enforcing rules and every day—well, I don’t know if it’s every day because they’re the Trust and Safety team so they don’t tell me exactly what’s going on—but there are frequent decisions around people’s safety. And what enables the volunteer community to basically both trust each other enough, and trust the platform operator enough, for the whole thing not to collapse due to mistrust and anger is that you’re being open and transparent enough about what you’re doing and why you’re doing it so that if you did make a mistake there’s a way to address it and be honest about it.
Greene: So at least at Wikimedia you have the overriding value of truthfulness. Should another platform value preserving places for people who otherwise wouldn’t have places to speak? People who historically or culturally haven’t had the opportunity to speak. How should they handle these instances of people being heckled or shouted down off of a site? From your perspective, how should they respond to that? Should they make an effort to preserve these spaces?
This is where I think in Silicon Valley in particular you often hear this thing that the technology is neutral— “we treat everybody the same.” —
Greene: And it’s not true.
Oh, of course it’s not true! But that’s the rhetoric, and it’s held up as being “the right thing.” But that’s like saying, “Okay, we’re going to administer public housing” while being completely blind to the context and the socio-economic and political realities of the human beings you’re taking action upon. It’s not a perfect comparison, but if you’re operating a public housing system, or whatever, and you’re not taking into account at all the socio-economic or ethnic backgrounds of the people for whom you’re making decisions, you’re going to be perpetuating and, most likely, amplifying social injustice. So people who run public housing or universities and so on are quite familiar with this notion that being neutral is actually not neutral. It perpetuates existing social, economic, and political power imbalances. And we found that’s absolutely the case with social media claiming to be neutral. The vulnerable people end up losing out. That’s what the research has shown and the activism has shown.
And, you know, in the Wikimedia community there are debates about this. There are people who have been editing for a long time who say, “we have to be neutral.” But on the other hand—what’s very clear—is that the greater the diversity of viewpoints and backgrounds and languages and genders, etc., of the people contributing to an article on a given topic, the better it is. So if you want something to actually have integrity, you can’t just have one type of person working on it. And so there are all kinds of reasons why it’s important, as a platform operator, that we do everything we can to ensure that this is a welcoming space for people of all backgrounds. That people who are under threat feel safe contributing to the platforms, and that it’s not just rich white guys in Northern Europe.
Greene: And at the same time we can’t expect them to be more perfect than the real world, also, right?
Well, yeah, but you do have to recognize that the real world is the real world, and there are power dynamics going on that you have to take into account. You can decide to amplify them by pretending they don’t exist, or you can work actively to compensate in a manner that is consistent with human rights standards.
Greene: Okay, one more question for you. Who is your free speech hero and why?
Wow, that’s a good question; nobody has asked me that before in that very direct way. I think I really have to name a group of people who set me on the path of caring deeply, for the rest of my life, about free speech. Those are the people in China, most of whom I met when I was a journalist there, who stood up to tell the truth despite tremendous threats like being jailed, or worse. Oftentimes I would witness, even from very ordinary people, the determination that “I am right, and I need to say this. And I know I’m taking a risk, but I must do it.” It’s because of my interactions with such people in my twenties, when I was starting out as a journalist in China, that I was set on this path. And I am grateful to them all, including several who are no longer on this earth, among them Liu Xiaobo, who received the Nobel Peace Prize while he was in jail, before he died.