EFF Statement on U.S. Supreme Court's Decision to Consider TikTok Ban

The TikTok ban itself and the DC Circuit's approval of it should be of great concern even to those who find TikTok undesirable or scary. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the U.S. has previously condemned globally.

The U.S. government should not be able to restrict speech—in this case by cutting off a tool used by 170 million Americans to receive information and communicate with the world—without proving with evidence that the tools are presently seriously harmful. But in this case, Congress has required and the DC Circuit approved TikTok’s forced divestiture based only upon fears of future potential harm. This greatly lowers well-established standards for restricting freedom of speech in the U.S. 

So we are pleased that the Supreme Court will take the case and will urge the justices to apply the appropriately demanding First Amendment scrutiny.

Speaking Freely: Winnie Kabintie

Winnie Kabintie is a journalist and Communications Specialist based in Nairobi, Kenya. As an award-winning youth media advocate, she is passionate about empowering young people with Media and Information Literacy skills, enabling them to critically engage with and shape the evolving digital media landscape in meaningful ways.

Greene: To get us started, can you tell us what the term free expression means to you? 

I think it's the opportunity to speak in a language that you understand and speak about subjects of concern to you and to anybody who is affected or influenced by the subject of conversation. To me, it is the ability to communicate openly and share ideas or information without interference, control, or restrictions. 

As a journalist, it means having the freedom to report on matters affecting society and my work without censorship or limitations on where that information can be shared. Beyond individual expression, it is also about empowering communities to voice their concerns and highlight issues that impact their lives. Additionally, access to information is a vital component of freedom of expression, as it ensures people can make informed decisions and engage meaningfully in societal discourse because knowledge is power.

Greene: You mention the freedom to speak and to receive information in your language. How do you see that currently? Are language differences a big obstacle that you see currently? 

If I just look at my society—I like to contextualize things—we have Swahili, which is a national language, and we have English as the secondary official language. But when it comes to policies, when it comes to public engagement, we only see this happening in documents that are written in English. This means that when it comes to the public barazas (community gatherings), interpretation is led by a few individuals, which creates room for disinformation and misinformation. I believe the language barrier is an obstacle to freedom of speech. We've also seen it from the civil society dynamics: if you're going to engage the community but you don't speak the same language as them, it becomes very difficult for you to engage them on the subject at hand. And if you have to use a translator, sometimes the only advantage they bring to the table is the fact that they understand different languages. They're not experts in the topic that you're discussing.

Greene: Why do you think the government only produces materials in English? Do you think part of that is because they want to limit who is able to understand them? Or is it just, are they lazy or they just disregard the other languages? 

In all fairness, I think it comes from the systematic approach to how things run. This has been the way of doing things, and it's easier to do it this way because translating some words from, for example, English to Swahili is very hard. And you see, as much as we speak Swahili in Kenya—and it's our national language—the kind of Swahili we speak is also very diluted or corrupted with English and Sheng, which I like to call “ki-shenglish”. I know there were attempts to translate the new Kenyan Constitution, and they did translate some bits of the summarized copy, but even then it wasn’t the full Constitution. We don't even know how to say certain English words in Swahili, which makes it difficult to translate many things. So I think it's just an innocent omission.

Greene: What makes you passionate about freedom of expression?

As a journalist and youth media advocate, my passion for freedom of expression stems from its fundamental role in empowering individuals and communities to share their stories, voice their concerns, and drive meaningful change. Freedom of expression is not just about the right to speak—it’s about the ability to question, to challenge injustices, and to contribute to shaping a better society.

For me, freedom of expression is deeply personal as I like to question, interrogate and I am not just content with the status quo. As a journalist, I rely on this freedom to shed light on critical issues affecting society, to amplify marginalized voices, and to hold power to account. As a youth advocate, I’ve witnessed how freedom of expression enables young people to challenge stereotypes, demand accountability, and actively participate in shaping their future. We saw this during the recent Gen Z revolution in Kenya when youth took to the streets to reject the proposed Finance Bill.

Freedom of speech is also about access. It matters to me that people not only have the ability to speak freely, but also have the platforms to articulate their issues. You can have all the voice you need, but if you do not have the platforms, then it becomes nothing. So it's also recognizing that we need to create the right platforms to advance freedom of speech. These, in our case, include platforms like radio and social media platforms. 

So we need to ensure that we have connectivity to these platforms. For example, in the rural areas of our countries, there are some areas that are not even connected to the internet. They don't have the infrastructure including electricity. It then becomes difficult for those people to engage in digital media platforms where everybody is now engaging. I remember recently during the Reject Finance Bill process in Kenya, the political elite realized that they could leverage social media and meet with and engage the youth. I remember the President was summoned to an X-space and he showed up and there was dialogue with hundreds of young people. But what this meant was that the youth in rural Kenya who didn’t have access to the internet or X were left out of that national, historic conversation. That's why I say it's not just as simple as saying you are guaranteed freedom of expression by the Constitution. It's also how governments are ensuring that we have the channels to advance this right. 

Greene: Have you had a personal experience or any personal experiences that shaped how you feel about freedom of expression? Maybe a situation where you felt like it was being denied to you or someone close to you was in that situation?

At a personal level I believe that I am a product of speaking out, and I try to use my voice to make an impact! There is one particular incident that stands out from my early career as a journalist. In 2014 I amplified a story from a video shared on Facebook by writing a news article that was published on The Kenya Forum, which at the time was one of only two fully digital publications in the country covering news and feature articles.

The story, which concerned a case of gender-based assault, gained traction, drawing attention to the unfortunate incident in which a woman had been stripped naked allegedly for being “dressed indecently.” The public uproar sparked the famous #MyDressMyChoice protest in Kenya, where women took to the streets countrywide to protest against sexual violence.

Greene: Wow. Do you have any other specific stories that you can tell about the time when you spoke up and you felt that it made a difference? Or maybe you spoke up, and there was some resistance to you speaking up? 

I've had many moments where I've spoken up and it's made a difference including the incident I shared in the previous question. But, on the other hand, I also had a moment where I did not speak out years ago, when a classmate in primary school was accused of theft. 

There was this girl once in class, she was caught with books that didn't belong to her and she was accused of stealing them. One of the books she had was my deskmate’s and I was there when she had borrowed it. So she was defending herself and told the teacher, “Winnie was there when I borrowed the book.” When the teacher asked me if this was true I just said, “I don't know.” That feedback was her last line of defense and the girl got expelled from school. So I’ve always wondered, if I'd said yes, would the teacher have been more lenient and realized that she had probably just borrowed the rest of the books as well? I was only eight years old at the time, but because of that, and how bad the outcome made me feel, I vowed to myself to always stand for the truth even when it’s unpopular with everyone else in the room. I would never look the other way in the face of an injustice or in the face of an issue that I can help resolve. I will never walk away in silence.

Greene: Have you kept to that since then? 

Absolutely.

Greene: Okay, I want to switch tracks a little bit. Do you feel there are situations where it's appropriate for government to limit someone's speech?

Yes, absolutely. In today’s era of disinformation and hate speech, it’s crucial to have legal frameworks that safeguard society. We live in a society where people, especially politicians, often make inflammatory statements to gain political mileage, and such remarks can lead to serious consequences, including civil unrest.

Kenya’s experience during the 2007-2008 elections is a powerful reminder of how harmful speech can escalate tensions and pit communities against each other. That period taught us the importance of being mindful of what leaders say, as their words have the power to unite or divide.

I firmly believe that governments must strike a balance between protecting freedom of speech and preventing harm. While everyone has the right to express themselves, that right ends where it begins to infringe on the rights and safety of others. It’s about ensuring that freedom of speech is exercised responsibly to maintain peace and harmony in society.

Greene: So what do we have to be careful about with giving the government the power to regulate speech? You mentioned hate speech can be hard to define. What's the risk of letting the government define that?

The risk is that the government may overstep its boundaries, as often happens. Another concern is the lack of consistent and standardized enforcement. For instance, someone with influence or connections within the government might escape accountability for their actions, while an activist doing the same thing could face arrest. This disparity in treatment highlights the risks of uneven application of the law and potential misuse of power.

Greene: Earlier you mentioned special concern for access to information. You mentioned children and you mentioned women. Both of those are groups of people where, at least in some places, someone else—not the government, but some other person—might control their access, right? I wonder if you could talk a little bit more about why it's so important to ensure access to information for those particular groups. 

I believe home is the foundational space where access to information and freedom of expression are nurtured. Families play a crucial role in cultivating these values, and it’s important for parents to be intentional about fostering an environment where open communication and access to information are encouraged. Parents have a responsibility to create opportunities for discussion within their households and beyond.

Outside the family, communities provide broader platforms for engagement. In Kenya, for example, public forums known as barazas serve as spaces where community members gather to discuss pressing issues, such as insecurity and public utilities, and to make decisions that impact the neighborhood. Ensuring that your household is represented in these forums is essential to staying informed and being part of decisions that directly affect you.

It’s equally important to help people understand the power of self-expression and active participation in decision-making spaces. By showing up and speaking out, individuals can contribute to meaningful change. Additionally, exposure to information and critical discussions is vital in today’s world, where misinformation and disinformation are prevalent. Families can address these challenges by having conversations at the dinner table, asking questions like, “Have you heard about this? What’s your understanding of misinformation? How can you avoid being misled online?”

By encouraging open dialogue and critical thinking in everyday interactions, we empower one another to navigate information responsibly and contribute to a more informed and engaged society.

Greene: Now, a question we ask everyone, who is your free speech hero? 

I have two. One is a Human Rights lawyer and former member of Parliament, Gitobu Imanyara. He is one of the few people in Kenya who fought, literally with blood and sweat, for freedom of speech and of the press in Kenya. He will always be my hero when we talk about press freedom. We are one of the few countries in Africa that enjoys extensive freedoms around speech and the press, and it’s thanks to people like him.

The other is an activist named Boniface Mwangi. He’s a person who never shies away from speaking up. It doesn’t matter who you are or how dangerous it gets, Boni, as he is popularly known, will always be that person who calls out the government when things are going wrong. You’re driving on the wrong side of the traffic just because you’re a powerful person in government. He'll be the person who will not move his car and he’ll tell you to get back in your lane. I like that. I believe when we speak up we make things happen.

Greene: Anything else you want to add? 

I believe it’s time we truly recognize and understand the importance of freedom of expression and speech. Too often, these rights are mentioned casually or taken at face value, without deeper reflection. We need to start interrogating what free speech really means, the tools that enable it, and the ways in which this right can be infringed upon.

As someone passionate about community empowerment, I believe the key lies in educating people about these rights—what it looks like when they are fully exercised and what it means when they are violated and especially in today’s digital age. Only by raising awareness can we empower individuals to embrace these freedoms and advocate for better policies that protect and regulate them effectively. This understanding is essential for fostering informed, engaged communities that can demand accountability and meaningful change.

Speaking Freely: Prasanth Sugathan

Interviewer: David Greene

*This interview has been edited for length and clarity.*

Prasanth Sugathan is Legal Director at Software Freedom Law Center, India (SFLC.in). Prasanth is a lawyer with years of practice in the fields of technology law, intellectual property law, administrative law, and constitutional law. He is an engineer turned lawyer and has worked closely with the Free Software community in India. He has appeared in many landmark cases before various Tribunals, High Courts, and the Supreme Court of India. He has also deposed before Parliamentary Committees on issues related to the Information Technology Act and Net Neutrality.

David Greene: Why don’t you go ahead and introduce yourself. 

Sugathan: I am Prasanth Sugathan, I am the Legal Director at the Software Freedom Law Center, India. We are a nonprofit organization based out of New Delhi, started in the year 2010. So we’ve been working at this for 14 years now, working mostly in the area of protecting rights of citizens in the digital space in India. We do strategic litigation, policy work, trainings, and capacity building. Those are the areas that we work in. 

Greene: What was your career path? How did you end up at SFLC? 

That’s an interesting story. I am an engineer by training. Then I was interested in free software. I had a startup at one point and I did a law degree along with it. I got interested in free software and got into it full time. Because of this involvement with the free software community, the first time I got involved in something related to policy was when there was discussion around software patents. The patent office had come out with a patent manual, and there was discussion about how it could affect the free software community and startups. So that was one discussion I followed. I wrote about it, and one thing led to another, and I was called to speak at a seminar in New Delhi. That’s where I met Eben and Mishi from the Software Freedom Law Center. That was before SFLC India was started, but once Mishi started the organization I joined as a Counsel. It’s been a long relationship.

Greene: Just in a personal sense, what does freedom of expression mean to you? 

Apart from being a fundamental right, as evident in all the human rights agreements we have and in the Indian Constitution, freedom of expression is the most basic aspect of a democratic nation. I mean, without free speech you cannot have a proper exchange of ideas, which is most important for a democracy. For any citizen to speak what they feel, to communicate their ideas, I think that is most important. As of now the internet is a medium which allows you to do that. So there should definitely be minimum restrictions from the government and other agencies in relation to the free exchange of ideas on this medium.

Greene: Have you had any personal experiences with censorship that have sort of informed or influenced how you feel about free expression? 

When SFLC.IN was started in 2010 our major idea was to support the free software community. But we got involved in the debates on free speech and privacy on the internet in 2011, when the IT Rules were introduced by the government as a draft for discussion and finally notified. These rules concerned the regulation of intermediaries, that is, online platforms. This was secondary legislation based on the Information Technology Act (IT Act) in India, which is the parent law. So when these discussions happened we got involved, and then one thing led to another. For example, there was a provision in the IT Act called Section 66-A which criminalized the sending of offensive messages through a computer or other communication devices. It was, ostensibly, introduced to protect women. And the irony was that two women were arrested under this law. That was the first arrest that happened: two women were arrested for comments they had made about a leader who had died.

This got us working on trying to talk to parliamentarians and others about how we could change this law. There were various instances of content being taken down and people being arrested, and it was always done under Section 66-A of the IT Act. We challenged the IT Rules before the Supreme Court. In a judgment in a 2015 case called Shreya Singhal v. Union of India, the Supreme Court read down the rules relating to intermediary liability. Under the rules, platforms could be asked to take down content and didn’t have much of an option; if they didn’t comply, they would lose their safe harbour protection. The Court said takedowns could only follow from actual knowledge, and what actual knowledge means is that someone gets a court order asking them to take down the content, or there is a direction from the government. These are the only two cases in which content could be taken down.

Greene: You’ve lived in India your whole life. Has there ever been a point in your life when you felt your freedom of expression was restricted? 

Currently we are going through such a phase, where you’re careful about what you’re speaking about. There is a lot of concern about what is happening in India currently. This is something we can see mostly impacting people who are associated with civil society. When they are voicing their opinions there is now a kind of fear about how the government sees it, whether they will take any action against you for what you say, and how this could affect your organization. Because when you’re affiliated with an organization it’s not just about yourself. You also need to be careful about how anything that you say could affect the organization and your colleagues. We’ve had many instances of nonprofit organizations and journalists being targeted. So there is a kind of chilling effect when you really don’t want to say something you would otherwise say strongly. There is always a toning down of what you want to say. 

Greene: Are there any situations where you think it’s appropriate for governments to regulate online speech? 

You don’t have an absolute right to free speech under India’s Constitution. There can be restrictions as stated under Article 19(2) of the Constitution. There can be reasonable restrictions by the government, for instance, for something that could lead to violence or something which could lead to a riot between communities. So mostly if you look at hate speech on the net which could lead to a violent situation or riots between communities, that could be a case where maybe the government could intervene. And I would even say those are cases where platforms should intervene. We have seen a lot of hate speech on the net during India’s current elections as there have been different phases of elections going on for close to two months. We have seen that happening with not just political leaders but with many supporters of political parties publishing content on various platforms which aren’t really in the nature of hate speech but which could potentially create situations where you have at least two communities fighting each other. It’s definitely not a desirable situation. Those are the cases where maybe platforms themselves could regulate or maybe the government needs to regulate. In this case, for example, when it is related to elections, the Election Commission also has its role, but in many cases we don’t see that happening. 

Greene: Okay, let’s go back to hate speech for a minute because that’s always been a very difficult problem. Is that a difficult problem in India? Is hate speech well-defined? Do you think the current rules serve society well or are there problems with it? 

I wouldn’t say it’s well-defined, but even in the current law there are provisions that address it. So anything which could lead to violence or which could lead to animosity between two communities will fall in the realm of hate speech. It’s not defined as such, but then that is where your free speech rights could be restricted. That definitely could fall under the definition of hate speech. 

Greene: And do you think that definition works well? 

I mean the definition is not the problem. It’s essentially a question of how it is implemented. It’s a question of how the government or its agency implements it. It’s a question of how platforms are taking care of it. These are two issues where there’s more that needs to be done. 

Greene: You also talked about misinformation in terms of elections. How do we reconcile freedom of expression concerns with concerns for preventing misinformation? 

I would definitely say it’s a gray area. I mean how do you really balance this? But I don’t think it’s a problem which cannot be addressed. Definitely there’s a lot for civil society to do, a lot for the private sector to do. Especially, for example, when hate speech is reported to the platforms. It should be dealt with quickly, but that is where we’re seeing the worst difference in how platforms act on such reporting in the Global North versus what happens in the Global South. Platforms need to up their act when it comes to handling such situations and handling such content. 

Greene: Okay, let’s talk about the platforms then. How do you feel about censorship or restrictions on freedom of expression by the platforms? 

Things have changed a lot as to how these platforms work. Now the platforms decide what kind of content gets to your feed, and the algorithms work to promote content that is more viral. In many cases we have seen how misinformation and hate speech go viral, while content that debunks the misinformation and provides the real facts doesn’t go as far; it doesn’t go viral or come up in your feed as fast. So the way platforms are dealing with this is definitely a problem. In many cases it might be economically beneficial for them to make sure that viral content which puts forth misinformation reaches more eyes.

Greene: Do you think that the platforms that are most commonly used in India—and I know there’s no TikTok in India—serve free speech interests or not?

When the Information Technology Rules were introduced and when the discussions happened, I would say civil society supported the platforms, essentially saying these platforms ensured people could enjoy their free speech rights and express themselves freely. How the situation changed over a period of time is interesting. These platforms are definitely still important for us to exercise these rights. But when it comes to, let’s say, content being regulated, some platforms do push back when the government asks them to take down content, but we have not seen that much. So whether they’re really the messiahs for free speech, I doubt. Over the years, we have seen that it is most often the case that when the government tells them to do something, it is in their interest to comply. There has not been much pushback except for maybe Twitter challenging it in court. There have not been many instances where these platforms supported users.

Greene: So we’ve talked about hate speech and misinformation, are there other types of content or categories of online speech that are either problematic in India now or at least that regulators are looking at that you think the government might try to do something with? 

One major concern the government is trying to regulate is deepfakes, with even the Prime Minister speaking about it. So suddenly that is a regulatory priority for the government. It is definitely a problem, especially for public figures, and particularly for women in politics, who often have their images manipulated. In India we see that at election time. Even politicians who have been in the field for a long time have had their images misused and morphed images circulated. So that’s definitely something the platforms need to act on. For example, you cannot have the luxury of taking, let’s say, 48 hours to decide what to do when something like that is posted. This is something platforms have to deal with as early as possible. We do understand there’s a lot of content and a lot of reporting happening, but in some cases, at least, there should be some prioritization of reports related to non-consensual sexual imagery.

Greene: As an engineer, how do you feel about deepfake tech? Should the regulatory concerns be qualitatively different than for other kinds of false information? 

When it comes to deepfakes, I would say the problem is that the technology has become mainstream. It has become very easy for a person to use these tools, which have become more accessible. Earlier you needed specialized knowledge, especially when it came to something like editing videos; now it’s much easier, and these tools are readily available. The major difference now is how easy it is to access these applications. There cannot be a case of fully regulating or fully controlling a technology. It’s not essentially a problem with the technology, because there are a lot of ethical use cases. Just because something is used for a harmful purpose doesn’t mean that you completely block the technology. There is definitely a case for regulating AI and regulating deepfakes, but that doesn’t mean you put a complete stop to it.

Greene: How do you feel about TikTok being banned in India? 

I think that’s less a question of technology or regulation and more of a geopolitical issue. I don’t think it has anything to do with the technology or even the transfer of data, for that matter. I think it was just a geopolitical issue related to India-China relations. The relations have soured over the border disputes and other things, and I think that was the trigger for the TikTok ban.

Greene: What is your most significant legal victory from a human rights perspective and why? 

The victory that we had in the fight against the 2011 Rules, the portions related to intermediary liability, which were struck down by the Supreme Court. That was important because when it came to platforms, and when it came to people expressing their critical views online, all of this content could have been taken down very easily. So that was definitely a case of free speech rights being affected without much recourse, and the outcome was a major victory.

Greene: Okay, now we ask everyone this question. Who is your free speech hero and why?

I can’t think of one person, but I think, for example, of when the country went through a bleak period in the 1970s and the government declared a national state of emergency. During that time we had journalists and politicians who fought for free speech rights with respect to the news media. At that time even writing something in the publications was difficult. We had many cases of journalists who were fighting this, people who had gone to jail for writing something, who had gone to jail for opposing the government or publicly criticizing the government. So I don’t think of just one person, but we have seen journalists and political leaders fighting back during that state of emergency. I would say those are the heroes who could fight the government, who could fight law enforcement. Then there was the case of Justice H.R. Khanna, a judge who stood up for citizens’ rights and gave his dissenting opinion against the majority view, which cost him the position of Chief Justice. Maybe I would say he’s a hero, a person who was clear about constitutional values and principles.

Speaking Freely: Tomiwa Ilori

Interviewer: David Greene

*This interview has been edited for length and clarity.*

Tomiwa Ilori is an expert researcher and a policy analyst with a focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at HURIDOCS. His postgraduate qualifications include master’s and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are personal.

Greene: Why don’t you start by introducing yourself?

Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects. 

Greene: Can you tell us more about the B-Tech project? 

The B-Tech project is a project by the UN human rights office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs) into the tech sector. The project looks at, for example, how  social media platforms can apply human rights due diligence frameworks or processes to their products and services more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies? 

Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally? 

I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath. Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to me. 

Greene: Is there a single experience or several experiences that shaped your views about freedom of expression? 

Yes. For context, I’m Nigerian and I also grew up in the Southwestern part of the country where most of the Yorùbá people live. As a Yoruba person and as someone who grew up listening and speaking the Yoruba language, language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs which are usually profound show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate that I also grew up and furthered my studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices in unique and diverse ways. 

Greene: Can you talk more about whether there is something about  Yorùbá language or culture that is uniquely supportive of freedom of expression? 

There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu” and what that means in a loose English translation is that you cannot shut the clapperless bell up, it is the bell’s right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do, it suggests that it is everyone’s right to express themselves. It suffices to say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar which is,“Ọmọdé gbọ́n, àgbà gbọ́n, lafí dá ótù Ifẹ̀” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom. 

Greene: Have you ever had a personal experience with censorship? 

Yes and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay tax to access the internet which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those who were unable to afford it and for me that is problematic and qualifies as a kind of censorship. 

Another one was actually very recent. Even though the internet service provider insisted that they did not shut down or throttle the internet, I remember that during the recent protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for the first time. According to the internet service provider, the shutdown was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn’t load. The connection appeared to work for a few seconds, and then all of a sudden it would stop, then work for some time, then all of a sudden nothing. I felt incapacitated and helpless. That’s the way I would describe it. I felt like, “Wow, I have written, thought, and spoken about this so many times and this is it.” For the first time I understood what it means to actually experience an internet shutdown and it’s not just the experience, it’s the helplessness that comes with it too.

Greene: Do you think there is ever a time when the government can justify an internet shutdown? 

The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech, misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting down the internet because they want to curtail examination fraud. When these are the kind of excuses used by actors, it demonstrates the limited understanding of actors on what international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns. 

Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that whatever action you take must be legal, i.e., provided clearly in a law that is not vague and that shows unequivocally and in detail the nature of the right being limited. Another requirement says that whatever action is taken to limit a right must be proportional. If you are trying to fight hate speech online, don’t you think it is disproportionate to shut down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified, i.e., it must protect a clearly defined public interest or order, which must be specific and not the blanket term ‘national security.’ Additionally, international human rights law is clear that these requirements are cumulative, i.e., you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity.

This shows that when trying to regulate online harms, regulation needs to be very specific. So, for example, state actors can actually claim that a particular piece of content or speech is causing harm, which the state actors must prove according to the requirements above. You can make a request such that just that content alone is restricted. Also, these must be put in context. Using hate speech as an example, there’s the Rabat Action Plan on Hate Speech, which was developed by the UN, and it’s very clear on the conditions that must be met before speech can be categorized as hate speech. So are these conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic content, but state actors never simply go for targeted removals that comply with international human rights standards; they usually go for the entire network.

I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet shutdowns. 

Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what types of online speech you think might be appropriately regulated by governments? 

For categories of speech that can be regulated, of course, that includes hate speech. Under international law, Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits propaganda for war, etc. The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However, these applicable provisions are not carte blanche for state actors. The major conditions for speech to qualify as hate speech must be fulfilled before it can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent.

Greene: In Nigeria or in Kenya or just on the continent in general? 

Yes, there are countries that still have lèse-majesté laws in their criminal laws and penal codes. We’ve had countries like Nigeria that were trying to come up with a version of such laws for the online space, but these have been fought down, mostly by civil society actors.

So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has the speech been made to? These are some of the criteria that must be fulfilled before you say, “okay, this qualifies as hate speech.”

There are also other kinds of clearly problematic content, child sexual abuse material for example, that are prima facie illegal and must be censored or removed or disallowed. That goes without saying. It’s customary international human rights law, especially as it applies to platform governance. Another category of speech could also be the non-consensual sharing of intimate images, which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by states.

I also must sound a note that there are contexts to applying speech laws. It is also the reason why speech laws are among the most difficult regulations to come up with: they are usually context-dependent, especially when they are to be balanced against international human rights standards. Of course, some of the biggest fears in platform regulation that touch on freedom of expression are how state actors could weaponize those laws to track or attack dissent, and how businesses platform speech mainly for profit.

Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s something we need to worry about, who has a role in regulating it? 

State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of journalists, human rights defenders and activists. So it has really not solved any problems. 

When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step back in a sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You could achieve a whole lot more through digital literacy and skills than you will with criminalizing misinformation. You can do way more by supporting journalists with fact-checking skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of useful solutions that could be applied effectively. 

Greene: What do you feel the role of the companies should be? 

Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.

One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what happens is that corporate-speak is louder than the human rights language in many of these standards. 

That said, some of the practical things that platforms could do is to step out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they can do. Companies need to get more involved, step into the arena and work with key stakeholders, including state actors and civil society, to educate and develop capacity on how their platforms actually work. For example, what are the processes involved in taking down a piece of content? What are the processes involved in getting appeals? What are the processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically because I’m not speaking about using automated tools—label content? Platforms also have responsibilities in being totally invested in the contexts they do business in. What are the triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning systems that platforms need to start paying attention to in order to understand their contexts and be able to address the harms on their platforms better.

Greene: What’s the most pressing free speech issue in the region in which you work? 

Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified. 

A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political advertisements? What are the guidelines on targeted advertisement? How are people’s data curated? What is it like in the algorithm black box? Platforms’ role in who says what, how, when and where is also a burning free speech issue. And we are moving towards a future where speech is being commodified and privatized. Public media, for example, are now being relegated to the background. Everyone wants to be on social media and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew on.

Greene: And finally, who is your free speech hero? 

His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat (not afrobeats with an “s,” but the original Afrobeat from which that genre came). Fela never started out as a political musician, but his music became highly political and highly popular among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria who lived through the brutal military era, his work resonated with a lot of people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.

Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He was unique in his own way and expressed himself through his music and lyrics. He is someone who has inspired a lot of people, including musicians, politicians and a lot of new generation activists.

Speaking Freely: Aji Fama Jobe

*This interview has been edited for length and clarity.

Aji Fama Jobe is a digital creator, IT consultant, blogger, and tech community leader from The Gambia. She helps run Women TechMakers Banjul, an organization that provides visibility, mentorship, and resources to women and girls in tech. She also serves as an Information Technology Assistant with the World Bank Group where she focuses on resolving IT issues and enhancing digital infrastructure. Aji Fama is a dedicated advocate working to leverage technology to enhance the lives and opportunities of women and girls in Gambia and across Africa.

Greene: Why don’t you start off by introducing yourself? 

My name is Aji Fama Jobe. I’m from Gambia and I run an organization called Women TechMakers Banjul that provides resources to women and girls in Gambia, particularly in the Greater Banjul area. I also work with other organizations that focus on STEM and digital literacy and aim to impact more regions and more people in the world. Gambia is made up of six different regions and we have host organizations in each region. So we go to train young people, especially women, in those communities on digital literacy. And that’s what I’ve been doing for the past four or five years. 

Greene: So this series focuses on freedom of expression. What does freedom of expression mean to you personally? 

For me it means being able to express myself without being judged. Because most of the time—and especially on the internet because of a lot of cyber bullying—I tend to think a lot before posting something. It’s all about, what will other people think? Will there be backlash? And I just want to speak freely. So for me it means to speak freely without being judged. 

Greene: Do you feel like free speech means different things for women in the Gambia than for men? And how do you see this play out in the work that you do? 

In the Gambia we have freedom of expression, the laws are there, but the culture is the opposite of the laws. Society still frowns on women who speak out, not just in the workspace but even in homes. Sometimes men say a woman shouldn’t speak loud or there’s a certain way women should express themselves. It’s the culture itself that makes women not speak up in certain situations. In our culture it’s widely accepted that you let the man or the head of the family—who’s normally a man, of course—speak. I feel like freedom of speech is really important when it comes to the work we do. Because women should be able to speak freely. And when you speak freely it gives you that confidence that you can do something. So it’s a larger issue. What our organization does on free speech is address the unconscious bias in the tech space that impacts working women. I work as an IT consultant and sometimes when we’re trying to do something technical people always assume IT specialists are men. So sometimes we just want to speak up and say, “It’s IT woman, not IT guy.”

Greene: We could say that maybe socially we need to figure this out, but now let me ask you this. Do you think the government has a role in regulating online speech? 

Those in charge of policy enforcement don’t understand how to navigate these online pieces. It’s not just about putting the policies in place. They need to train people how to navigate this thing or how to update these policies in specific situations. It’s not just about what the culture says. The policy is the policy and people should follow the rules, not just as civilians but also as policy enforcers and law enforcement. They need to follow the rules, too. 

Greene: What about the big companies that run these platforms? What’s their role in regulating online speech? 

With cyber-bullying I feel like the big companies need to play a bigger role in trying to bring down content sometimes. Take Facebook for example. They don’t have many people that work in Africa and understand Africa with its complexities and its different languages. For instance, in the Gambia we have 2.4 million people but six or seven languages. On the internet people use local languages to do certain things. So it’s hard to moderate on the platform’s end, but also they need to do more work. 

Greene: So six local languages in the Gambia? Do you feel there’s any platform that has the capability to moderate that? 

In the Gambia? No. We have some civil society that tries to report content, but it’s just civil society and most of them do it on a voluntary basis, so it’s not that strong. The only thing you can do is report it to Facebook. But Facebook has bigger countries and bigger issues to deal with, and you end up waiting in a lineup of those issues and then the damage has already been done. 

Greene: Okay, let’s shift gears. Do you consider the current government of the Gambia to be democratic? 

I think it is pretty democratic because you can speak freely after 2016 unlike with our last president. I was born in an era when people were not able to speak up. So I can only compare the last regime and the current one. I think now it’s more democratic because people are able to speak out online. I can remember back before the elections of 2016 that if you said certain things online you had to move out of the country. Before 2016 people who were abroad would not come back to Gambia for fear of facing reprisal for content they had posted online. Since 2016 we have seen people we hadn’t seen for like ten or fifteen years. They were finally able to come back. 

Greene: So you lived in the country under a non-democratic regime with the prior administration. Do you have any personal stories you could tell about life before 2016 and feeling like you were censored? Or having to go outside of the country to write something? 

Technically it was a democracy but the fact was you couldn’t speak freely. What you said could get you in trouble—I don’t consider that a democracy. 

During the last regime I was in high school. One thing I realized was that there were certain political things teachers wouldn’t discuss because they had to protect themselves. At some point I realized things changed because before 2016 we didn’t say the president’s name. We would give him nicknames, but the moment the guy left power we felt free to say his name directly. I experienced censorship from not being able to say his name or talk about him. I realized there was so much going on when the Truth, Reconciliation and Reparations Commission (TRRC) happened and people finally had the confidence to go on TV and speak about their stories.

As a young person I learned that what you see is not everything that’s happening. There were a lot of things that were happening but we couldn’t see because the media was restricted. The media couldn’t publish certain things. When he left and through the TRRC we learned about what happened. A lot of people lost their lives. Some had to flee. Some people lost their mom or dad or some got raped. I think that opened my world. Even though I’m not politically inclined or in the political space, what happened there impacted me. Because we had a political moment where the president didn’t accept the elections, and a lot of people fled and went to Senegal. I stayed like three or four months and the whole country was on lockdown. So that was my experience of what happens when things don’t go as planned when it comes to the electoral process. That was my personal experience.

Greene: Was there news media during that time? Was it all government-controlled or was there any independent news media? 

We had some independent news media, but those were run by Gambians outside of the country. The media inside the country couldn’t publish anything against the government. If you wanted to know what was really happening, you had to go online. At some point, WhatsApp was blocked so we had to move to Telegram and other social media. Also, at some point, because my dad was in Iraq, I had to download a VPN so I could talk to him and tell him what was happening in the country, because my mom and I were there. That’s why when people censor the internet I’m really keen on that aspect, because I’ve experienced it.

Greene: What made you start doing the work you’re doing now? 

First, when I started doing computer science—I have a computer science background—there was no one there to tell me what to do or how to do it. I had to navigate things for myself or look for people to guide me. I just thought, we don’t have to repeat the same thing for other people. That’s why we started Women TechMakers. We try to guide people and train them. We want employers to focus on skills instead of gender. So we get to train people, we have a lot of book plans and online resources that we share with people. If you want to go into a certain field we try to guide you and send you resources. That’s one of the things we do. Just for people to feel confident in their skills. And everyday people say to me, “Because of this program I was able to get this thing I wanted,” like a job or an event. And that keeps me going. Women get to feel confident in their skills and in the places they work, too. Companies are always looking for diversity and inclusion. Like, “oh I have two female developers.” At the end of the day you can say you have two developers and they’re very good developers. And yeah, they’re women. It’s not like they’re hired because they’re women, it’s because they’re skilled. That’s why I do what I do. 

Greene: Is there anything else you wanted to say about freedom of speech or about preserving online open spaces? 

I work with a lot of technical people who think freedom of speech is not their issue. But what I keep saying to people is that you think it’s not your issue until you experience it. But freedom of speech and digital rights are everybody’s issues. Because at the end of the day if you don’t have that freedom to speak freely online or if you are not protected online we are all vulnerable. It should be everybody’s responsibility. It should be a collective thing, not just government making policies. But also people need to be aware of what they’re posting online. The words you put out there can make or break someone, so it’s everybody’s business. That’s how I see digital rights and freedom of expression. As a collective responsibility. 

Greene: Okay, our last question that we ask everybody. Who is your free speech hero? 

My mom’s elder sister. She passed away in 2015. Her name was Mariama Jaw and she was in the political space even during the time when people were not able to speak. She was my hero because I went to political rallies with her and she would say what people were not willing to say. Not just in political spaces, but in general conversation, too. She was somebody who would tell you the truth no matter what would happen, whether her life was in danger or not. I got so much inspiration from her because a lot of women don’t go into politics or do certain things and they just want to get a husband, but she went against all odds and she was a politician, a mother and sister to a lot of people, to a lot of women in her community.

Speaking Freely: Anriette Esterhuysen

*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil. This interview has been edited for length and clarity. 

Anriette Esterhuysen is a human rights defender and computer networking trailblazer from South Africa. She has pioneered the use of Internet and Communications Technologies (ICTs) to promote social justice in South Africa and throughout the world, focusing on affordable Internet access. She was the executive director of the Association for Progressive Communications from 2000 to 2017. In November 2019 Anriette was appointed by the Secretary-General of the United Nations to chair the Internet Governance Forum’s Multistakeholder Advisory Group.

Greene: Can you go ahead and introduce yourself for us?

Esterhuysen: My name is Anriette Esterhuysen, I am from South Africa and I’m currently sitting here with David in São Paulo, Brazil. My closest association remains with the Association for Progressive Communications, where I was executive director from 2000 to 2017. I continue to work for APC as a consultant in the capacity of Senior Advisor on Internet Governance and convenor of the annual African School on Internet Governance (AfriSIG).

Greene: Can you tell us more about the African School on Internet Governance (AfriSIG)?

AfriSIG is fabulous. It differs from internet governance capacity building provided by the technical community in that it aims to build critical thinking. It also does not gloss over the complex power dynamics that are inherent to multistakeholder internet governance. It tries to give participants a hands-on experience of how different interest groups and sectors approach internet governance issues.

AfriSIG started as a result of Titi Akinsanmi,  a young Nigerian doing postgraduate studies in South Africa, approaching APC and saying, “Look, you’ve got to do something. There’s a European School of Internet Governance, there’s one in Latin America, and where is there more need for capacity-building than in Africa?” She convinced me and my colleague Emilar Vushe Gandhi, APC Africa Policy Coordinator at the time, to organize an African internet governance school in 2013 and since then it has taken place every year. It has evolved over time into a partnership between APC and the African Union Commission and Research ICT Africa.

It is a residential leadership development and learning event that takes place over five days. We bring together people who are already working in internet or communications policy in some capacity. We create space for conversation between people from government, civil society, parliaments, regulators, the media, business and the technical community on what in Africa are often referred to as “sensitive topics”. This can be anything from LGBTQ rights to online freedom of expression, corruption, authoritarianism, and accountable governance. We try to create a safe space for deep diving into the reasons for the dividing lines between, for example, government and civil society in Africa. It’s very delicate. I love doing it because I feel that it transforms people’s thinking and the way they see one another and one another’s roles. At the end of the process, it is common for a government official to say they now understand better why civil society demands media freedom, and how transparency can be useful in protecting the interests of public servants. And civil society activists have a better understanding of the constraints that state officials face in their day-to-day work. It can be quite a revelation for individuals from civil society to be confronted with the fact that in many respects they have greater freedom to act and speak than civil servants do.

Greene: That’s great. Okay now tell me, what does free speech mean to you?

I think of it as freedom of expression. It’s fundamental. I grew up under Apartheid in South Africa and was active in the struggle for democracy. There is something deeply wrong with being surrounded by injustice, cruelty and brutality and not being allowed to speak about it. Even more so when one's own privilege comes at the expense of the oppressed, as was the case for white South Africans like myself. For me, freedom of expression is the most profound part of being human. You cannot change anything, deconstruct it, or learn about it at a human level without the ability to speak freely about what it is that you see, or want to understand. The absence of freedom of expression entrenches misinformation, a lack of understanding of what is happening around you. It facilitates willful stupidity and selective knowledge. That’s why it’s so smart of repressive regimes to stifle freedom of expression. By stifling free speech you disempower the victims of injustice from voicing their reality, on the one hand, and, on the other, you entrench the unwillingness of those who are complicit with the injustice to confront that they’re part of it.

It is impossible to shift a state of repression and injustice without speaking out about it. That is why people who struggle for freedom and justice speak about it, even if doing so gets them imprisoned, assassinated or executed. Change starts through people, the media, communities, families, social movements, and unions, speaking about what needs to change. 

Greene: Having grown up in Apartheid, is there a single personal experience or a group of personal experiences that really shaped your views on freedom of expression?

I think I was fortunate in the sense that I grew up with a mother who—based on her Christian beliefs—came to see Apartheid as being wrong. She was working as a social worker for the main state church—the Dutch Reformed Church (DRC)—at the time of the Cottesloe Consultation convened in Johannesburg by the World Council of Churches (WCC) shortly after the Sharpeville Massacre. An outcome statement from this consultation, and later deliberations by the WCC in Geneva, condemned the DRC for its racism. In response, the DRC decided to leave the WCC. At a church meeting my mother attended, she listened to the debate, including someone in the church hierarchy who spoke against this decision and challenged the church for its racist stance. His words made sense to her. She spoke to him after the meeting and soon joined the organization he had started to oppose Apartheid, the Christian Institute. His name was Beyers Naudé and he became an icon of the anti-Apartheid struggle and an enemy of the apartheid state. Apparently, my first protest march was in a pushchair at a rally in 1961 to oppose the right-wing National Party government's decision for South Africa to leave the Commonwealth.

There’s no single moment that shaped my view of freedom of expression. The thing about living in the context of that kind of racial segregation and repression is that you see it every day. It’s everywhere around you, but like Nazi Germany, people—white South Africans—chose not to see it, or if they did, to find ways of rationalizing it.

Censorship was both a consequence of and a building block of the Apartheid system. There was no real freedom of expression. But because we had courageous journalists, and a broad-based political movement—above ground and underground—that opposed the regime, there were spaces where one could speak/listen/learn.  The Congress of Democrats established in the 1950s after the Communist Party was banned was a social justice movement in which people of different faiths and political ideologies (Jewish, Christian and Muslim South Africans alongside agnostics and communists) fought for justice together. Later in the 1980s, when I was a student, this broad front approach was revived through the United Democratic Front. Journalists did amazing things. When censorship was at its height during the State of Emergency in the 1980s, newspapers would go to print with columns of blacked-out text—their way of telling the world that they were being censored.

I used to type up copy filed over the phone or on cassettes by reporters for the Weekly Mail when I was a student. We had to be fast because everything had to be checked by the paper’s lawyers before going to print. The lack of freedom of expression was legislated. The courage of editors and individual journalists to defy this—and, if they could not, to make the censorship obvious—made a huge impact on me.

Greene: Is there a time when you, looking back, would consider that you were personally censored? 

I was very much personally censored at school. I went to an Afrikaans secondary school. I have a memory of a time when, after coming back from a vacation, my math teacher—whom I had no personal relationship with—walked past me in class and asked me how my holiday on Robben Island was. I thought, why is he asking me that? A few days later I heard from a teacher I was friendly with that there had been a special staff meeting about me. They felt I was very politically outspoken in class and the school hierarchy needed to take action. No actual action was taken... but I felt watched, and through that, censored, even if not silenced.

I felt that because, for me, being white, it was easier to speak out than for black South Africans, it would be wrong not to do so. As a teenager, I had already made that choice. It was painful from a social point of view because I was very isolated; I didn’t have many friends, and I saw the world so differently from my peers. In 1976, when the Soweto riots broke out, I remember someone in my class saying, “This is exactly what we’ve been waiting for because now we can just kill them all.” This is probably also why I feel a deep connection with Israel/Palestine. There are many dimensions to the Apartheid analogy. The one that stands out for me is how, as was the case in South Africa too, those with power—Jewish Israelis—dehumanize and villainize the oppressed: Palestinians.

Greene: At some point did you decide that you want human rights more broadly and freedom of expression to be a part of your career?

I don’t think it was a conscious decision. I think it was what I was living for. It was the raison d’être of my life for a long time. After high school, I had secured places at two universities: at one for a science degree and at the other for a degree in journalism. But I ended up going to a different university, making the choice based on the strength of its student movement. The struggle against Apartheid was expressed and conceptualized as a struggle for human rights. The Constitution of democratic South Africa was crafted by human rights lawyers, and in many respects it is a localized interpretation of the Universal Declaration.

Later, in the late 1980s, when I started working on access to information through the use of Information and Communication Technologies (ICTs), it felt like an extension of the political work I had done as a student and in my early working life. APC, which I joined as a member—not staff—in the 1990s, was made up of people from other parts of the world who had been fighting their own struggles for freedom—Latin America, Asia, and Central/Eastern Europe—all with very similar hopes about how the use of these technologies could enable freedom and solidarity.

Greene: So fast forward to now, currently do you think the platforms promote freedom of expression for people or restrict freedom of expression?

Not a simple question. Still, I think the net effect is more freedom of expression. The extent of online freedom of expression is uneven and it’s distorted by the platforms in some contexts. Just look at the biased pro-Israel way in which several platforms moderate content. Enabling hate speech in contexts of conflict can definitely have a silencing effect. By not restricting hate in a consistent manner, they end up restricting freedom of expression.  But I think it’s disingenuous to say that overall the internet does not increase freedom of expression. And social media platforms, despite their problematic business models, do contribute. They could of course do it so much better, fairly and consistently, and for not doing that they need to be held accountable. 

Greene: We can talk about some of the problems and difficulties. Let’s start with hate speech. You said it’s a problem we have to tackle. How do we tackle it? 

You’re talking to a very cynical old person here. I think that social media amplifies hate speech. But I don’t think they create the impulse to hate. Social media business models are extractive and exploitative. But we can’t fix our societies by fixing social media. I think that we have to deal with hate in the offline world. Channeling energy and resources into trying to grow tolerance and respect for human rights in the online space is not enough. It’s just dealing with the symptoms of intolerance and populism. We need to work far harder to hold people, particularly those with power, accountable for encouraging hate (and disinformation). Why is it easy to get away with online hate in India? Because Modi likes hate. It’s convenient for him, it keeps him in political power. Trump is another example of a leader that thrives on hate. 

What’s so problematic about social media platforms is the monetization of this. That is absolutely wrong and should be stopped—I can say all kinds of things about it. We need to have a multi-pronged approach. We need market regulation, perhaps some form of content regulation, and new ways of regulating advertising online. We need access to data on what happens inside these platforms. Intervention is needed, but I do not believe that content control is the right way to do it.  It is the business model that is at the root of the problem. That’s why I get so frustrated with this huge global effort by governments (and others)  to ensure information integrity through content regulation. I would rather they spend the money on strengthening independent media and journalism.

Greene: We should note we are currently at an information integrity conference today. In terms of hate speech, are there hazards to having hate speech laws? 

South Africa has hate speech laws which I believe are necessary. Racial hate speech continues to be a problem in South Africa. So is xenophobic hate speech. We have an election coming on May 29 [2024] and I was listening to talk radio on election issues and hearing how political parties use xenophobic tropes in their campaigns was terrifying. “South Africa has to be for South Africans.” “Nigerians run organized crime.”  “All drugs come from Mozambique,” and so on. Dangerous speech needs to be called out.  Norms are important. But I think that establishing legalized content regulation is risky. In contexts without robust protection for freedom of expression, such regulation can easily be abused by states to stifle political speech.

Greene: Societal or legal norms?

Both.  Legal norms are necessary because social norms can be so inconsistent, volatile. But social norms shape people’s everyday experience and we have to strive to make them human rights aware. It is important to prevent the abuse of legal norms—and states are, sadly, pretty good at doing just that. In the case of South Africa hate speech regulation works relatively well because there are strong protections for freedom of expression. There are soft and hard law mechanisms. The South African Human Rights Commission developed a social media charter to counter harmful speech online as a kind of self-regulatory tool. All of this works—not perfectly of course—because we have a constitution that is grounded in human rights. Where we need to be more consistent is in holding politicians accountable for speech that incites hate. 

Greene: So do we want checks and balances built into the regulatory scheme or are you just wanting it existing within a government scheme that has checks and balances built in? 

I don’t think you need new global rule sets. I think the existing international human rights framework provides what we need and just needs to be strengthened and its application adapted to emerging tech. One of the reasons why I don’t think we should be obsessive about restricting hate speech online is because it is a canary in a coal mine. In societies where there’s a communal or religious conflict or racial hate,  removing its manifestation online could be a missed opportunity to prevent explosions of violence offline.  That is not to say that there should not be recourse and remedy for victims of hate speech online. Or that those who incite violence should not be held accountable. But I believe we need to keep the bar high in how we define hate speech—basically as speech that incites violence.  

South Africa is an interesting case because we have very progressive laws when it comes to same-sex marriage, same-sex adoption, relationships, insurance, spousal recognition, medical insurance and so on, but there’s still societal prejudice, particularly in poor communities.  That is why we need a strong rights-oriented legal framework.

Greene: So that would be another area where free speech can be restricted and not just from a legal sense but you think from a higher level principles sense. 

Right. Perhaps what I am trying to say is that there is speech that incites violence and it should be restricted. And then there is speech that is hateful and discriminatory, and this should be countered, called out, and challenged, but not censored.  When you’re talking about the restriction—or not even the restriction but the recognition and calling out of—harmful speech it’s important not just to do that online. In South Africa stopping xenophobic speech online or on public media platforms would be relatively simple. But it’s not going to stop xenophobia in the streets.  To do that we need other interventions. Education, public awareness campaigns, community building, and change in the underlying conditions in which hate thrives which in our case is primarily poverty and unemployment, lack of housing and security.

Greene: This morning someone who spoke at this event about misinformation said, “The vast majority of misinformation is online.” And certainly in the US, researchers say that’s not true; most of it is on cable news. But it struck me that someone who is considered an expert should know better. We have information ecosystems, and online does not exist separately.

It’s not separate. Agree. There’s such a strong tendency to look at online spaces as an alternative universe. Even in countries with low internet penetration, there’s a tendency to focus on the online components of these ecosystems. Another example would be child online protection. Most child abuse takes place in the physical world, and most child abusers are close family members, friends or teachers of their victims—but there is a global obsession with protecting children online.  It is a shortsighted and ‘cheap’ approach and it won’t work. Not for dealing with misinformation or for protecting children from abuse.

Greene: Okay, our last question we ask all of our guests. Who is your free speech hero? 

Desmond Tutu. I have many free speech heroes but Bishop Tutu is a standout because he could be so charming about speaking his truths. He was fearless in challenging the Apartheid regime. But he would also challenge his fellow Christians.  One of his best lines was, “If LGBT people are not welcome in heaven, I’d rather go to the other place.”  And then the person I care about and fear for every day is Egyptian blogger Alaa Abd el-Fattah. I remember walking at night through the streets of Cairo with him in 2012. People kept coming up to him, talking to him, and being so obviously proud to be able to do so. His activism is fearless. But it is also personal, grounded in love for his city, his country, his family, and the people who live in it. For Alaa freedom of speech, and freedom in general, was not an abstract or a political goal. It was about freedom to love, to create art, music, literature and ideas in a shared way that brings people joy and togetherness.

Greene: Well now I have a follow-up question. You said you think free speech is undervalued these days. In what ways and how do we see that? 

We see it manifested in the absence of tolerance, in the increase in people claiming that their freedoms are being violated by the expression of those they disagree with, or who criticize them. It’s as if we’re trying to establish these controlled environments where we don’t have to listen to things that we think are wrong, or that we disagree with. As you said earlier, information ecosystems have offline and online components. Getting to the “truth” requires a mix of different views, disagreement, fact-checking, and holding people who deliberately spread falsehoods accountable for doing so. We need people to have the right to free speech, and to counter-speech. We need research and evidence gathering, investigative journalism, and, most of all, critical thinking. I’m not saying there shouldn't be restrictions on speech in certain contexts, but do it because the speech is illegal or actively inciteful. Don’t do it because you think it will achieve so-called information integrity. And especially, don’t do it in ways that undermine the right to freedom of expression.

Speaking Freely: Marjorie Heins

This interview has been edited for length and clarity.

Marjorie Heins is a writer, former civil rights/civil liberties attorney, and past director of the Free Expression Policy Project (FEPP) and the American Civil Liberties Union's Arts Censorship Project. She is the author of "Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge," which won the Hugh M. Hefner First Amendment Award in Book Publishing in 2013, and "Not in Front of the Children: Indecency, Censorship, and the Innocence of Youth," which won the American Library Association's Eli Oboler Award for Best Published Work in the Field of Intellectual Freedom in 2002. 

Her most recent book is "Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project." She has written three other books and scores of popular and scholarly articles on free speech, censorship, constitutional law, copyright, and the arts. She has taught at New York University, the University of California - San Diego, Boston College Law School, and the American University of Paris. Since 2015, she has been a volunteer tour guide at the Metropolitan Museum of Art in New York City.

Greene: Can you introduce yourself and the work you’ve done on free speech and how you got there?

Heins: I’m Marjorie Heins, I’m a retired lawyer. I spent most of my career at the ACLU. I started in Boston, where we had a very small office, and we sort of did everything—some sex discrimination cases, a lot of police misconduct cases, occasionally First Amendment. Then, after doing some teaching and a stint at the Massachusetts Attorney General’s office, I found myself in the national office of the ACLU in New York, starting a project on art censorship. This was in response to the political brouhaha over the National Endowment for the Arts starting around 1989/1990.

Culture wars, attacks on some of the grants made by the NEA, became a big hot button issue. The ACLU was able to raise a little foundation money to hire a lawyer to work on some of these cases. And one case that was already filed when I got there was National Endowment for the Arts vs Finley. It was basically a challenge by four theater performance artists whose grants had been recommended by the peer panel but then ultimately vetoed by the director after a lot of political pressure because their work was very much “on the edge.” So I joined the legal team in that case, the Finley case, and it had a long and complicated history. Then, by the mid-1990s we were faced with the internet. And there were all these scares over pornography on the internet poisoning the minds of our children. So the ACLU got very involved in challenging censorship legislation that had been passed by Congress, and I worked on those cases.

I left the ACLU in 1998 to write a book about what I had learned about censorship. I was curious to find out more about the history primarily of obscenity legislation—the censorship of sexual communications. So it’s a scholarly book called “Not in front of the Children.” Among the things I discovered is that the origins of censorship of sexual content, sexual communications, come out of this notion that we need to protect children and other “vulnerable beings.” And initially that included women and uneducated people, but eventually it really boiled down to children—we need censorship basically of everybody in order to protect children. So that’s what Not in front of the Children was all about. 

And then I took my foundation contacts—because at the ACLU if you have a project you have to raise money—and started a little project, a little think tank which became affiliated with the National Coalition Against Censorship called the Free Expression Policy Project. And at that point we weren’t really doing litigation anymore, we were doing a lot of friend of the court briefs, a lot of policy reports and advocacy articles about some of the values and competing interests in the whole area of free expression. And one premise of this project, from the start, was that we are not absolutists. So we didn’t accept the notion that because the First Amendment says “Congress shall make no law abridging the freedom of speech,” then there’s some kind of absolute protection for something called free speech and there can’t be any exceptions. And, of course, there are many exceptions. 

So the basic premise of the Free Expression Policy Project was that some exceptions to the First Amendment, like obscenity laws, are not really justified because they are driven by different ideas about morality and a notion of moral or emotional harm rather than some tangible harm that you can identify like, for example, in the area of libel and slander or invasion of privacy or harassment. Yes, there are exceptions. The default, the presumption, is free speech, but there could be many reasons why free speech is curtailed in certain circumstances. 

The Free Expression Policy Project continued for about seven years. It moved to the Brennan Center for Justice at NYU Law School for a while, and, finally, I ran out of ideas and funding. I kept up the website for a little while longer, then ultimately ended the website. Then I thought, “okay, there’s a lot of good information on this website and it’s all going to disappear, so I’m going to put it into a book.” Oh, I left out the other book I worked on in the early 2000s – about academic freedom, the history of academic freedom, called “Priests of Our Democracy: The Supreme Court, Academic Freedom, and the Anti-Communist Purge.” This book goes back in history even before the 1940s and 1950s Red Scare and the effect that it had on teachers and universities. And then this last book is called “Ironies and Complications of Free Speech: News and Commentary From the Free Expression Policy Project,” which is basically an anthology of the best writings from the Free Expression Policy Project. 

And that’s me. That’s what I did.

Greene: So we have a ton to talk about because a lot of the things you’ve written about are either back in the news and regulatory cycle or never left it. So I want to start with your book “Not in Front of the Children” first. I have at least one copy and I’ve been referring to it a lot and suggesting it because we’ve just seen a ton of efforts to try and pass new child protection laws to protect kids from online harms. And so I’m curious: first there was a raft of efforts around TikTok being bad for kids, now we’re seeing a lot of efforts aimed at shielding kids from harmful material online. Do you think this is a throughline from concerns back in mid-19th-century England? Is it still the same debate or is there something different about these online harms?

Both are true, I think. It’s the same and it’s different. What’s the same is that using children as an argument for basically trying to suppress information, ideas, or expression that somebody disapproves of goes back to the beginning of censorship laws around sexuality. And the subject matters have changed, the targets have changed. I’m not too aware of new proposals for internet censorship of kids, but I’m certainly aware of what states—of course, Florida being the most prominent example—have done in terms of school books, school library books, public library books, and education from not only k-12 but also higher education in terms of limiting the subject matters that can be discussed. And the primary target seems to be anything to do with gay or lesbian sexuality and anything having to do with a frank acknowledgement of American slavery or Jim Crow racism. Because the argument in Florida, and this is explicit in the law, is that it would make white kids feel bad, so let’s not talk about it. So in that sense the two targets that I see now—we’ve got to protect the kids against information about gay and lesbian people and information about the true racial history of this country—are a little different from the 19th century and even much of the 20th century.

Greene: One of the things I see is that the harms motivating the book bans and school restrictions are the same harms that are motivating at least some of the legislators who are trying to pass these laws. And notably a lot of the laws only address online harmful material without being specific about subject matter. We’re still seeing some that are specifically about sexual material, but a lot of them, including the Kids Online Safety Act really just focus on online harms more broadly. 

I haven’t followed that one, but it sounds like it might have a vagueness problem!

Greene: One of the things I get concerned about with the focus on design is that, like, a state Attorney General is not going to be upset if the design has kids reading a lot of bible verses or tomes about being respectful to your parents. But they will get upset and prosecute people if the design feature is recommending to kids gender-affirming care or whatever. I just don’t know if there’s a way of protecting against that in a law. 

Well, as we all know, when we’re dealing with commercial speech there’s a lot more leeway in terms of regulation, and especially if ads are directed at kids. So I don’t have a problem with government legislation in the area of restricting the kinds of advertising that can be directed at kids. But if you get out of the area of commercial speech and to something that’s kind of medical, could you have constitutional legislation that prohibited websites from directing kids to medically dangerous procedures? You’re sort of getting close to the borderline. If it’s just information then I think the legislation is probably going to be unconstitutional even if it’s related to kids. 

Greene: Let’s shift to academic freedom. Which is another fraught issue. What do you think of the current debates now over both restrictions on faculty and universities restricting student speech? 

Academic freedom is under the gun from both sides of the political spectrum. For example, Diversity, Equity, and Inclusion (DEI) initiatives, although they seem well-intentioned, have led to some pretty troubling outcomes. So when those college presidents were being interrogated by members of Congress (in December 2023), they were in a difficult position, among other reasons, because at least at Harvard and Penn it was pretty clear there were instances of really appalling applications of this idea of Diversity, Equity, and Inclusion—both to require a certain kind of ideological approach and to censor or punish people who didn’t go along with the party line, so to speak.

The other example I’m thinking of, and I don’t know if Harvard and Penn do this – I know that the University of California system does it or at least it used to – everybody who applies for a faculty position has to sign a diversity statement, like a loyalty oath, saying that these are the principles they agree with and they will promise to promote. 

And you know you have examples, I mean I may sound very retrograde on this one, but I will not use the pronoun “they” for a singular person. And I know that would mean I couldn’t get a faculty job! And I’m not sure if my volunteer gig at the Met museum is going to be in trouble because they, very much like universities, have given us instructions, pages and pages of instructions, on proper terminology – what terminology is favored or disfavored or should never be used, and “they” is in there. You can have circumlocutions so you can identify a single individual without using he or she if that individual – I mean you can’t even know what the individual’s preference is. So that’s another example of academic freedom threats from I guess you could call the left or the DEI establishment. 

The right in American politics has a lot of material, a lot of ammunition to use when they criticize universities for being too politically correct and too “woke.” On the other hand, you have the anti-woke law in Florida which is really, as I said before, directed against education about the horrible racial history of this country. And some of those laws are just – whatever you may think about the ability of state government and state education departments to dictate curriculum and to dictate what viewpoints are going to be promoted in the curriculum – the Florida anti-woke law and don’t say gay law really go beyond I think any kind of discretion that the courts have said state and local governments have to determine curriculum. 

Greene: Are you surprised at all that we’re seeing that book bans are as big of a thing now as they were twenty years ago? 

Well, nothing surprises me. But yes, I would not have predicted that there were going to be the current incarnations of what you can remember from the old days, groups like the American Family Association, the Christian Coalition, the Eagle Forum, the groups that were “culture warriors” who were making a lot of headlines with their arguments forty years ago against even just having art that was done by gay people. We’ve come a long way from that, but now we have Moms for Liberty and present-day incarnations of the same groups. The homophobia agenda is a little more nuanced, it’s a little different from what we were seeing in the days of Jesse Helms in Congress. But the attacks on drag performances, this whole argument that children are going to be groomed to become drag queens or become gay—that’s a little bit of a different twist, but it’s basically the same kind of homophobia. So it’s not surprising that it’s being churned up again if this is something that politicians think they can get behind in order to get elected. Or, let me put it another way, if the Moms for Liberty type groups make enough noise and seem to have enough political potency, then politicians are going to cater to them. 

And so the answer has to be groups on the other side that are making the free expression argument or the intellectual freedom argument or the argument that teachers and professors and librarians are the ones who should decide what books are appropriate. Those groups have to be as vocal and as powerful in order to persuade politicians that they don’t have to start passing censorship legislation in order to get votes.

Greene: Going back to the college presidents and being grilled on the hill, you wrote that maybe there was, in response to the genocide question, which I think they were most sharply criticized there, that there was a better answer that they could have given. Could you talk about that? 

I think in that context, both for political reasons and for reasons of policy and free speech doctrine, the answer had to be that if students on campus are calling for genocide of Jews or any other ethnic or religious group, that should not be permitted on campus, and it amounts to racial harassment. Of course, I suppose you could imagine scenarios where two antisemitic kids in the privacy of their dorm room said this and nobody else heard it—okay, maybe it doesn’t amount to racial harassment. But private colleges are not bound by the First Amendment. They all have codes of civility. Public colleges are bound by the First Amendment, but not by the same standards as the public square. So I took the position that in that circumstance the presidents had to answer, “Yes, that would violate our policies and subject a student to discipline.” But that’s not the same as calling for the intifada or calling for even the elimination of the state of Israel as having been a mistake 75 years ago. So I got a little pushback on that little blog post that I wrote. And somebody said, “I’m surprised a former ACLU lawyer is saying that calling for genocide could be punished on a college campus.” But you know, the ACLU has many different political opinions within both the staff and Board. There were often debates on different kinds of free speech issues and where certain lines are drawn. And certainly on issues of harassment and when hate speech becomes harassment—under what circumstances it becomes harassment. So, yes, I think that’s what they should have said. A lot of legal scholars, including David Cole of the ACLU, said they gave exactly the right answer, the legalistic answer, that it depends on the context. In that political situation that was not the right answer. 

Greene: It was awkward. They did answer as if they were having an academic discussion and not as if they were talking to members of Congress. 

Well they also answered as if they were programmed. I mean Claudine Gay repeated the exact same words that probably somebody had told her to say at least twice if not more. And that did not look very good. It didn’t look like she was even thinking for herself. 

Greene: I do think they were anticipating the followup question of, “Well isn’t saying ‘From the River to the Sea’ a call for genocide and how come you haven’t punished students for that?” But as you said, that would then lead into a discussion of how we determine what is or is not a call for genocide. 

Well they didn’t need a followup question because to Elise Stefanik, “Intifada” or “from the river to the sea” was equivalent to a call for genocide, period, end of discussion. Let me say one more thing about these college hearings. What these presidents needed to say is that it’s very scary when politicians start interrogating college faculty or college presidents about curriculum, governance, and certainly faculty hires. One of the things that was going on there was they didn’t think there were enough conservatives on college faculties, and that was their definition of diversity. You have to push back on that, and say it is a real threat to academic freedom and all of the values that we talk about that are important at a university education when politicians start getting their hands on this and using funding as a threat and so forth. They needed to say that. 

Greene: Let’s pull back and talk about free speech principles more broadly. After many years of work in this area, why do you think free expression is important? 

What is the value of free expression more globally? [laughs] A lot of people have opined on that. 

Greene: Why is it important to you personally? 

Well I define it pretty broadly. So it doesn’t just include political debate and discussion and having all points of view represented in the public square, which used to be the narrower definition of what the First Amendment meant, certainly according to the Supreme Court. But the Court evolved. And so it’s now recognized, as it should be, that free expression includes art. The movies—it doesn’t even have to be verbal—it can be dance, it can be abstract painting. All of the arts, which feed the soul, are part of free expression. And that’s very important to me because I think it enriches us. It enriches our intellects, it enriches our spiritual lives, our emotional lives. And I think it goes without saying that political expression is crucial to having a democracy, however flawed it may be. 

Greene: You mentioned earlier that you don’t consider yourself to be a free speech absolutist. Do you consider yourself to be a maximalist or an enthusiast? What do you see as being sort of legitimate restrictions on any individual’s freedom of expression?

Well, we mentioned this at the beginning. There are a lot of exceptions to the First Amendment that are legitimate and certainly, when I started at the ACLU I thought that defamation laws and libel and slander laws violated the First Amendment. Well, I’ve changed my opinion. Because there’s real harm that gets caused by libel and slander. As we know, the Supreme Court has put some First Amendment restrictions around those torts, but they’re important to have. Threats are a well-recognized exception to the freedom of speech, and the kind of harm caused by threats, even if they’re not followed through on, is pretty obvious. Incitement becomes a little trickier because where do you draw the lines? But at some point an incitement to violent action I think can be restricted for obvious reasons of public safety. And then we have restrictions on false advertising but, of course, if we’re not in the commercial context, the Supreme Court has told us that lies are protected by the First Amendment. That’s probably wise just in terms of not trying to get the government and the judicial process involved in deciding what is a lie and what isn’t. But of course that’s done all the time in the context of defamation and commercial speech. Hate speech is something, as we know, that’s prohibited in many parts of Europe but not here. At least not in the public square, as opposed to employment or educational contexts. Some people would say, “Well, that’s dictated by the First Amendment and they don’t have the First Amendment over there in Europe, so we’re better.” But having worked in this area for a long time and having read many Supreme Court decisions, it seems to me the First Amendment has been subjected to the same kind of balancing test that they use in Europe when they interpret their European Convention on Human Rights or their individual constitutions. They just have different policy choices. 
And the policy choice to prohibit hate speech given the history of Europe is understandable. Whether it is effective in terms of reducing racism, Islamophobia, antisemitism… is there more of that in Europe than there is here? Hard to know. It’s probably not that effective. You make martyrs out of people who are prosecuted for hate speech. But on the other hand, some of it is very troubling. In the United States Holocaust denial is protected. 

Greene: Can you talk a little bit about your experience being a woman advocating for First Amendment rights for sexual expression during a time when there was at least some form of feminist movement saying that some types of sexualization of women were harmful to women? 

That drove a wedge right through the feminist movement for quite a number of years. There’s still some of that around, but I think less. The battle against pornography has been pretty much a losing battle. 

Greene: Are there lessons from that time? You were clearly on one side of it, are there lessons to be learned from that when we talk about sort of speech harms? 

One of the policy reports we did at the Free Expression Policy Project was on media literacy as an alternative to censorship. Media literacy can be expanded to encompass a lot of different kinds of education. So if you had decent sex education in this country and kids were able to think about the kinds of messages that you see in commercial pornography and amateur pornography, in R-rated movies, in advertising—I mean the kind of sexist messages and demeaning messages that you see throughout the culture—education is the best way of trying to combat some of that stuff. 

Greene: Okay, our final question that we ask everyone. Who is your free speech hero? 

When I started working on “Priests of Our Democracy” the most important case, sort of the culmination of the litigation that took place challenging loyalty programs and loyalty oaths, was a case called Keyishian v. Board of Regents. This is a case in which Justice Brennan, writing for a very slim majority of five Justices, said academic freedom is “a special concern of the First Amendment, which does not tolerate laws that cast a pall of orthodoxy over the classroom.” Harry Keyishian was one of the five plaintiffs in this case. He was one of five faculty members at the University of Buffalo who refused to sign what was called the Feinberg Certificate, which was essentially a loyalty oath. The certificate required all faculty to say “I’ve never been a member of the Communist Party and if I was, I told the President and the Dean all about it.” He was not a member of the Communist Party, but as Harry said much later in an interview – because he had gone to college in the 1950s and he saw some of the best professors being summarily fired for refusing to cooperate with some of these Congressional investigating committees – fast forward to the Feinberg Certificate loyalty oath: he said his refusal to sign was his “revenge on the 1950s.” And so he becomes the plaintiff in this case that challenges the whole Feinberg Law, this whole elaborate New York State law that basically required loyalty investigations of every teacher in the public system. So Harry became my hero. I start my book with Harry. The first line in my book is, “Harry Keyishian was a junior at Queens College in the fall of 1952 when the Senate Internal Security Subcommittee came to town.” And he’s still around. I think he just had his 90th birthday!

Speaking Freely: Tanka Aryal

*This interview took place in April 2024 at NetMundial+10 in São Paulo, Brazil and has been edited for length and clarity. 

Tanka Aryal is the President of Digital Rights Nepal. He is an attorney practicing at the Supreme Court of Nepal. He has worked to promote digital rights, the right to information, freedom of expression, civic space, accountability, and internet freedom nationally for the last 15 years. Mr. Aryal holds two LLM degrees in International Human Rights Law, from Kathmandu School of Law and Central European University in Hungary. Additionally, he completed degrees at Oxford University in the UK and Tokiwa University in Japan. Mr. Aryal has worked as a consultant and staff member with different national and international organizations including FHI 360, the International Center for Not-for-Profit Law (ICNL), UNESCO, the World Bank, ARTICLE 19, the United Nations Development Programme (UNDP), ISOC, and the United Nations Department of Economic and Social Affairs (UNDESA/DPADM). Mr. Aryal led a right to information campaign throughout the country for more than four years as the Executive Director of the Citizens’ Campaign for Right to Information.

Greene: Can you introduce yourself? And can you tell me what kind of work your organization does on freedom of speech in particular? 

I am Tanka Aryal, I’m from Nepal and I represent Digital Rights Nepal. Looking at my background of work, I have been working on freedom of expression for the last twenty years. Digital Rights Nepal is a new organization that started during COVID when a number of issues came up particularly around freedom of expression online and the use of different social media platforms expressing the ideas of every individual representing different classes, castes, and groups of society. The majority of work done by my organization is particularly advocating for freedom of expression online as well as data privacy and protection. This is the domain we work in mainly, but in the process of talking about and advocating for freedom of expression we also talk about access to information, online information integrity, misinformation, and disinformation. 

Greene: What does free speech mean to you personally?

It’s a very heavy question! I know it’s not an absolute right—it has limitations. But I feel like if I am not doing any harm to other individuals or it’s not a mass security type of thing, there should not be interference from the government, platforms, or big companies. At the same time, there are a number of direct and indirect undue influences from the political wings or the Party who is running the government, which I don’t like. No interference in my thoughts and expression—that is fundamental for me with freedom of expression. 

Greene: Do you consider yourself to be passionate about freedom of expression?

Oh yes. What I’ve realized is, if I consider the human life, existence starts once you start expressing yourself and dealing and communicating with others. So this is the very fundamental freedom for every human being. If this part of rights is taken away then your life, my life, as a human is totally incomplete. That’s why I’m so passionate about this right. Because this right has created a foundation for other rights as well. For example, if I speak out and demand my right to education or the right to food, if my right to speak freely is not protected, then those other rights are also at risk.

Greene: Do you have a personal experience that shaped how you feel about freedom of expression?

Yes. I don’t mean this in a legal sense, but my personal understanding is that if you are participating in any forum, unless you express your ideas and thoughts, then you are very hardly counted. This is the issue of existence and making yourself exist in society and in community. What I realized was that when you express your ideas with the people and the community, then the response is better and sometimes you get to engage further in the process. If I would like to express myself, if there are no barriers, then I feel comfortable. In a number of cases in my life and journey dealing with the government and media and different political groups, if I see some sort of barriers or external factors that limit me speaking, then that really hampers me. I realize that that really matters. 

Greene: In your opinion, what is the state of freedom of expression in Nepal right now? 

It’s really difficult. It’s not one of those absolute types of things. There are some indicators of where we stand. For instance, where we stand on the Corruption Index, where we stand on the Freedom of Expression Index. If I compare the state of freedom of expression in Nepal, it’s definitely better than the surrounding countries like India, Bangladesh, Pakistan, and China. But, learning from these countries, my government is trying to be more restrictive. Some laws and policies have been introduced that limit freedom of expression online. For instance, TikTok is banned by the government. We have considerably good conditions, but still there is room to improve in a way that you can have better protections for expression. 

Greene: What was the government’s thinking with banning TikTok?

There are a number of interpretations. Before banning TikTok the government was seen as pro-China. Once the government banned TikTok—India had already banned it—that decision supported a narrative that the government is leaning to India rather than China. You know, this sort of geopolitical interpretation. A number of other issues were there, too. Platforms were not taking measures even for some issues that shouldn’t have come through the platforms. So the government took the blanket approach in a way to try to promote social harmony and decency and morality. Some of the content published on TikTok was not acceptable, in my opinion, as a consumer myself. But the course of correction could have been different, maybe regulation or other things. But the government took the shortcut way by banning TikTok, eliminating the problem. 

Greene: So a combination of geopolitics and that they didn’t like what people were watching on TikTok? 

Actually there are a number of narratives told by the different blocks of people, people with different ideas and the different political wings. It was said that the government—the Maoist leader is the prime minister—considers the very rural people as their vote bank. The government sees them as less literate, brain-washed types of people. “Okay, this is my vote bank, no one can sort of toss it.” Then once TikTok became popular the TikTok users were the very rural people, women, marginalized people. So they started using TikTok, asking questions to the government and things like that. It was said that the Maoist party was not happy with that. “Okay, now our vote bank is going out of our hands so we better block TikTok and keep them in our control.” So that is the narrative that was also discussed. 

Greene: It’s similar in the US, we’re dealing with this right now. Similarly, I think it’s a combination of the geopolitics just with a lot of anti-China sentiment in the US as well as a concern around, “We don’t like what the kids are doing on TikTok and China is going to use it to serve political propaganda and brainwash US users.”

In the case of the US and India, TikTok was banned for national security. But in our case, the government never said, “Okay, TikTok is banned for our national security.” Rather, they were focusing on content that the government wasn’t happy with. 

Greene: Right, and let me credit Nepal there for their candor, though I don’t like the decision. Because I personally don’t think the United States government’s national security excuse is very convincing either. But what types of speech or categories of content or topics are really targeted by regulators right now for restriction? 

To be honest, the elected leaders, maybe the President, the Prime Minister, the powerholders don’t like the questions being posed to them. That is a general thing. Maybe the Mayor, maybe the Prime Minister, maybe a Minister, maybe a Chief Minister of one province—the powerholders don’t like being questioned. That is one type of speech made by the people—asking questions, asking for accountability. So that is one set of targets. Similarly, some speech that’s for the protection of the rights of the individual in many cases—like hate speech against Dalit, and women, and the LGBTQIA community—so any sort of speech or comments, any type of content, related to this domain is an issue. People don’t have the capacity to listen to even very minor critical things. If anybody says, “Hey, Tanka, you have these things I would like to be changed from your behavior.” People can say these things to me. As a public position holder I should have that ability to listen and respond accordingly. But politicians say, “I don’t want to listen to any sort of criticism or critical thoughts about me.” Particularly the political nature of the speech which seeks accountability and raises transparency issues, that is mostly targeted. 

Greene: You said earlier that as long as your speech doesn’t harm someone there shouldn’t be interference. Are there certain harms that are caused by speech that you think are more serious or that really justify regulation or laws restricting them?

It’s a very tricky one. Even if regulation is justified, if one official can ban something blanketly, it should go through judicial scrutiny. We tend to not have adequate laws. There are a number of gray areas. Those gray areas have been manipulated and misused by the government. In many cases, misused by, for example, the police. What I understood is that our judiciary is sometimes very sensible and very sensitive about freedom of expression. However, in many cases, if the issue is related to the judiciary itself they are very conservative. Two days back I read in a newspaper that there was a sting operation around one judge engaging [in corruption] with a business. And some of the things came into the media. And the judiciary was so reactive! It was not blamed on the whole judiciary, but the judiciary asked online media to remove that content. There were a number of discussions. Like without further investigation or checking the facts, how can the judiciary give that order to remove that content? Okay, one official thought that this is wrong content, and if the judiciary has the power to take it down, that’s not right and that can be misused any time. I mean, the judiciary is really good if the issues are related to other parties, but if the issue is related to the judiciary itself, the judiciary is conservative. 

Greene: You mentioned gray areas and you mentioned some types of hate speech. Is that a gray area in Nepal? 

Yeah, actually, we don’t have that much confidence in law. What we have is the Electronic Transactions Act. Section 47 says that content online can not be published if the content harms others, and so on. It’s very abstract. So that law can be misused if the government really wanted to drag you into some sort of very difficult position. 

We have been working toward and have provided input on a new law that’s more comprehensive, that would define things in proper ways that have less of a chance of being misused by the police. But it could not move ahead. The bill was drafted in the past parliament. It took lots of time, we provided input, and then after five years it could not move ahead. Then parliament dissolved and the whole thing became null. The government is not that consultative. Unlike how here we are talking [at NetMundial+10] with multistakeholder participation—the government doesn’t bother. They don’t see incentive for engaging civil society. Rather, they consider civil society troublemakers: better to keep us away and pass the law. That is the idea they are practicing. We don’t have very clear laws, and because we don’t have clear laws some people really violate fundamental principles. Say someone was attacking my privacy or I was facing defamation issues. The police are very shorthanded; they can’t arrest that person even if they’re doing something really bad. In the meantime, the police, if they have a good political nexus and they just want to drag somebody, they can misuse it. 

Greene: How do you feel about private corporations being gatekeepers of speech? 

It’s very difficult. Even during election time the Election Commission issued an Election Order of Conduct; you could see how foolish they are. They were giving the mandate to the ISPs that, “If there is a violation of this Order of Conduct, you can take it down.” That sort of blanket power given to them can be misused any time. So if you talk about our case, we don’t have that many giant corporations, though of course Meta and all the major companies are there. Particularly the government has given certain mandates to ISPs, and in many cases even the National Press Council was making demands of the ISP Association and the Nepal Telecommunications Authority (NTA) that regulates all ISPs. Without having a very clear mandate, the Press Council and the NTA are exercising power to instruct the ISPs, “Hey, take this down. Hey, don’t publish this.” So that’s the sort of mechanism and the practice out there. 

Greene: You said that Digital Rights Nepal was founded during the pandemic. What was the impetus for starting the organization? 

We were totally trapped at home, working from home, studying from home, everything from home. I had worked for a nonprofit organization in the past, advocating for freedom of expression and more, and when we were at home during COVID a number of issues came out about online platforms. Some people were able to exercise their rights because they have access to the internet, but some people didn’t have access to the internet and were unable to exercise freedom of expression. So we recognized there are a number of issues and there is a big digital divide. There are a number of regulatory gray areas in this sector. Looking at the number of kids who were compelled to do online school, their data protection and privacy was another issue. We were engaging in these e-commerce platforms to buy things and there aren’t proper regulations. So we thought there are a number of issues and nobody working on them, so let’s form this initiative. It didn’t come all of the sudden, but our working background was there and that situation really made us realize that we needed to focus our work on these issues. 

Greene: Okay, our final question. Who is your free speech hero? 

It depends. In my context, in Nepal, there are a couple of people that don’t hesitate to express their ideas even if it is controversial. There’s also Voltaire’s saying, “I defend your freedom of expression even if I don’t like the content.” He could be one of my free speech heroes. Because sometimes people are hypocrites. They say, “I try to advocate freedom of expression if it applies to you and the government and others, but if any issues come to harm me I don’t believe in the same principle.” Then people don’t defend freedom of expression. I have seen a number of people showing their hypocrisy once the time came where the speech is against them. But for me, like Voltaire says, even if I don’t like your speech I’ll defend it until the end because I believe in the idea of freedom of expression.  

EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0

EFF along with the ACLU of Northern California and the Center for Democracy & Technology filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed.

Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile—but automation makes this process a breeze. Unfollowing all friends, groups, and pages makes the News Feed blank, but this allows you to curate your News Feed by refollowing people and organizations you want regular updates on. Importantly, unfollowing isn’t the same thing as unfriending—unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles.

As Louis Barclay, the developer of Unfollow Everything 1.0, explained:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Prof. Zuckerman fears being sued by Meta, Facebook’s parent company, because the company previously sent Louis Barclay a cease-and-desist letter. Prof. Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools.

In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case.

The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. 

But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use.

Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0.

Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.”

Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing the tools that block methods of unwanted corporate tracking such as advertising cookies, and block stalkerware deployed by malicious actors.

We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.

The French Detention: Why We're Watching the Telegram Situation Closely

EFF is closely monitoring the situation in France in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.  

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.  

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.  

There appear to be three categories of Telegram-related charges:  

  • First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram.  
  • The second set of charges relate to “complicité” with crimes that were committed in some respect on or through Telegram. These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.”  
  • The third set of charges all relate to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.  

Now we are left to speculate. 

It is possible that all of the charges derive from “the failure to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges connected to the failure to file the encryption declaration likely also derive from the “legally authorized interceptions” being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet it did not previously bring charges for that omission. 

Refusal to cooperate with a valid legal order for assistance with an interception could be similarly prosecuted in most international legal systems, including the United States. EFF has frequently contested the validity of such orders and the gag orders associated with them, and has urged services to contest them in court and pursue all appeals. But once courts have finally upheld such orders, they must be complied with. The situation is more difficult where a nation lacks a properly functioning judiciary or due process, as in China or Saudi Arabia. 

In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or knowledge. Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud,” could likely be based on a failure to remove such posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to “lack of moderation” as being at the heart of their investigation. Under French law, Article 323-3-2, it is a crime to knowingly allow the distribution of illegal content or provision of illegal services, or to facilitate payments for either. 

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned.

In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted merely because Telegram did an inadequate job of removing offending content that it was generally aware of, that could expose nearly every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than a generalized awareness. And both of these situations are much different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, and not Telegram itself or others at the company. 

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are encrypted in such a way that prevents the company from reading the contents of communications—end-to-end encryption. (Only one-to-one messages with the “secret messages” option enabled are end-to-end encrypted.) And even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world. 

This arrest marks an alarming escalation by a state’s authorities. 

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. French authorities may ask for technical measures that endanger the security and privacy of those users. Durov and Telegram may or may not comply. Those running similar services may not have anything to fear, or these charges may be the canary in the coalmine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information for us to know for sure.  

It is not the first time Telegram’s laissez-faire attitude toward content moderation has led to government reprisals. In 2022, the company was forced to pay a fine in Germany for not establishing a lawful way for reporting illegal content or naming an entity in Germany to receive official communication. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.  

EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rules

EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. 

Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

The DSA Must Not Become A Tool For Global Censorship

The letter plays into some of the worst fears of critics of the DSA that it would be used by EU regulators as a global censorship tool rather than addressing societal risks in the EU. 

The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures,” “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech. It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate. 

Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk.

People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. 

Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. 

Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harms to people who live within the EU, nor what actions the Commission expects VLOPs to take to address those risks. The letter itself admits that the assessment is still in process, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for human rights-centered enforcement of the DSA that also considers the law’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly. 

Read the full letter here.

In These Five Social Media Speech Cases, Supreme Court Set Foundational Rules for the Future

The U.S. Supreme Court addressed government’s various roles with respect to speech on social media in five cases reviewed in its recently completed term. The through-line of these cases is a critically important principle that sets limits on government’s ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users’ First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation, but will not be infringed by the independent decisions of the platforms themselves.

As a general overview, the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, looked at government’s role as a regulator of social media platforms. The issue was whether state laws in Texas and Florida that prevented certain online services from moderating content were constitutional in most of their possible applications. The Supreme Court did not rule on that question and instead sent the cases back to the lower courts to reexamine NetChoice’s claim that the statutes had few possible constitutional applications.

The court did, importantly and correctly, explain that at least Facebook’s Newsfeed and YouTube’s Homepage are examples of platforms exercising their own First Amendment rights in deciding how to display and organize content, and that the laws could not constitutionally be applied to Newsfeed, Homepage, and similar services, a preliminary step in determining whether the laws were facially unconstitutional.

Lindke v. Freed and Garnier v. O’Connor-Ratcliffe looked at the government’s role as a social media user who has an account and wants to use its full features, including blocking other users and deleting comments. The Supreme Court instructed the lower courts to first look to whether a government official has the authority to speak on behalf of the government, before looking at whether the official used their social media page for governmental purposes, conduct that would trigger First Amendment protections for the commenters.

Murthy v. Missouri, the jawboning case, looked at the government’s mixed role as a regulator and user, in which the government may be seeking to coerce platforms to engage in unconstitutional censorship or may also be a user simply flagging objectionable posts as any user might. The Supreme Court found that none of the plaintiffs had standing to bring the claims because they could not show that their harms were traceable to any action by the federal government defendants.

We’ve analyzed each of the Supreme Court decisions, Moody v. NetChoice (decided with NetChoice v. Paxton), Murthy v. Missouri, and Lindke v. Freed (decided with Garnier v. O’Connor-Ratcliffe), in depth.

But some common themes emerge when all five cases are considered together.

  • Internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves. This principle, which EFF has been advocating for many years, is evident in each of the rulings. In Lindke, the Supreme Court recognized that government officials, if vested with and exercising official authority, could violate the First Amendment by deleting a user’s comments or blocking them from commenting altogether. In Murthy, the Supreme Court found that users could not sue the government for violating their First Amendment rights unless they could show that government coercion led to their content being taken down or obscured, rather than the social media platform’s own editorial decision. And in the NetChoice cases, the Supreme Court explained that social media platforms typically exercise their own protected First Amendment rights when they edit and curate which posts they show to their users, and the government may violate the First Amendment when it requires them to publish or amplify posts.

  • Underlying these rulings is the Supreme Court’s long-awaited recognition that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and are often guided in this process by their own community standards or similar editorial policies. This is seen in the Supreme Court’s emphasis in Murthy that jawboning is not actionable if the content moderation was the independent decision of the platform rather than coerced by the government. And a similar recognition of independent decision-making underlies the Supreme Court’s First Amendment analysis in the NetChoice cases. The Supreme Court has now thankfully moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Supreme Court used that language to describe the process in last term’s case, Twitter v. Taamneh.

  • This term’s cases also confirm that traditional First Amendment rules apply to social media. In Lindke, the Supreme Court recognized that when government controls the comments components of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. In the NetChoice cases, the Supreme Court found that platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Plenty of legal issues around social media remain to be decided. But the 2023-24 Supreme Court term has set out important speech-protective rules that will serve as the foundation for many future rulings. 

 

Victory! D.C. Circuit Rules in Favor of Animal Rights Activists Censored on Government Social Media Pages

In a big win for free speech online, the U.S. Court of Appeals for the D.C. Circuit ruled that a federal agency violated the First Amendment when it blocked animal rights activists from commenting on the agency’s social media pages. We filed an amicus brief in the case, joined by the Foundation for Individual Rights and Expression (FIRE).

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH) in 2021, arguing that the agency unconstitutionally blocked their comments opposing animal testing in scientific research on the agency’s Facebook and Instagram pages. (NIH provides funding for research that involves testing on animals.)

NIH argued it was simply implementing reasonable content guidelines that included a prohibition against public comments that are “off topic” to the agency’s social media posts. Yet the agency implemented the “off topic” rule by employing keyword filters that included words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop to block PETA activists from posting comments that included these words.

NIH’s Social Media Pages Are Limited Public Forums

The D.C. Circuit first had to determine whether the comment sections of NIH’s social media pages are designated public forums or limited public forums. As the court explained, “comment threads of government social media pages are designated public forums when the pages are open for comment without restrictions and limited public forums when the government prospectively sets restrictions.”

The court concluded that the comment sections of NIH’s Facebook and Instagram pages are limited public forums: “because NIH attempted to remove a range of speech violating its policies … we find sufficient evidence that the government intended to limit the forum to only speech that meets its public guidelines.”

The nature of the government forum determines what First Amendment standard courts apply in evaluating the constitutionality of a speech restriction. Speech restrictions that define limited public forums must only be reasonable in light of the purposes of the forum, while speech restrictions in designated public forums must satisfy more demanding standards. In both forums, however, viewpoint discrimination is prohibited.

NIH’s Social Media Censorship Violated Animal Rights Activists’ First Amendment Rights

After holding that the comment sections of NIH’s Facebook and Instagram pages are limited public forums subject to a lower standard of reasonableness, the D.C. Circuit then nevertheless held that NIH’s “off topic” rule as implemented by keyword filters is unreasonable and thus violates the First Amendment.

The court explained that because the purpose of the forums (the comment sections of NIH’s social media pages) is directly related to speech, “reasonableness in this context is thus necessarily a more demanding test than in forums that have a primary purpose that is less compatible with expressive activity, like the football stadium.”

In rightly holding that NIH’s censorship was unreasonable, the court adopted several of the arguments we made in our amicus brief, in which we assumed that NIH’s social media pages are limited public forums but argued that the agency’s implementation of its “off topic” rule was unreasonable and thus unconstitutional.

Keyword Filters Can’t Discern Context

We argued, for example, that keyword filters are an “unreasonable form of automated content moderation because they are imprecise and preclude the necessary consideration of context and nuance.”

Similarly, the D.C. Circuit stated, “NIH’s off-topic policy, as implemented by the keywords, is further unreasonable because it is inflexible and unresponsive to context … The permanent and context-insensitive nature of NIH’s speech restriction reinforces its unreasonableness.”

Keyword Filters Are Overinclusive

We also argued, related to context, that keyword filters are unreasonable “because they are blunt tools that are overinclusive, censoring more speech than the ‘off topic’ rule was intended to block … NIH’s keyword filters assume that words related to animal testing will never be used in an on-topic comment to a particular NIH post. But this is false. Animal testing is certainly relevant to NIH’s work.”

The court acknowledged this, stating, “To say that comments related to animal testing are categorically off-topic when a significant portion of NIH’s posts are about research conducted on animals defies common sense.”

NIH’s Keyword Filters Reflect Viewpoint Discrimination

We also argued that NIH’s implementation of its “off topic” rule through keyword filters was unreasonable because those filters reflected a clear intent to censor speech critical of the government, that is, speech reflecting a viewpoint that the government did not like.

The court recognized this, stating, “NIH’s off-topic restriction is further compromised by the fact that NIH chose to moderate its comment threads in a way that skews sharply against the appellants’ viewpoint that the agency should stop funding animal testing by filtering terms such as ‘torture’ and ‘cruel,’ not to mention terms previously included such as ‘PETA’ and ‘#stopanimaltesting.’”

On this point, we further argued that “courts should consider the actual vocabulary or terminology used … Certain terminology may be used by those on only one side of the debate … Those in favor of animal testing in scientific research, for example, do not typically use words like cruelty, revolting, tormenting, torture, hurt, kill, and stop.”

Additionally, we argued that “a highly regulated social media comments section that censors Plaintiffs’ comments against animal testing gives the false impression that no member of the public disagrees with the agency on this issue.”

The court acknowledged both points, stating, “The right to ‘praise or criticize governmental agents’ lies at the heart of the First Amendment’s protections … and censoring speech that contains words more likely to be used by animal rights advocates has the potential to distort public discourse over NIH’s work.”

We are pleased that the D.C. Circuit took many of our arguments to heart in upholding the First Amendment rights of social media users in this important internet free speech case.

Supreme Court Dodges Key Question in Murthy v. Missouri and Dismisses Case for Failing to Connect The Government’s Communication to Specific Platform Moderation

We don’t know a lot more about when government jawboning social media companies—that is, attempting to pressure them to censor users’ speech— violates the First Amendment; but we do know that lawsuits based on such actions will be hard to win. In Murthy v. Missouri, the U.S. Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases because none of the plaintiffs could show that any of the statements by the government they complained of were likely the cause of any specific actions taken by the social media platforms against them or that they would happen again.   

As we have written before, the First Amendment forbids the government from coercing a private entity to censor, whether the coercion is direct or subtle. This has been an important principle in countering efforts to threaten and pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication to an intermediary about users’ speech is unconstitutional; indeed, some are beneficial—for example, platforms often reach out to government actors they perceive as authoritative sources of information. And the distinction between proper and improper speech is often obscure. 

While the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

So, when do the government’s efforts to persuade one to censor another become coercion? This was a hard question prior to Murthy. And unfortunately, it remains so, though a different jawboning case also recently decided provides some clarity. 

Rather than provide guidance to courts about the line between permissible and impermissible government communications with platforms about publishing users’ speech, the Supreme Court dismissed Murthy, holding that every plaintiff lacked “standing” to bring the lawsuit. That is, none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. So, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion. 

The through line between this case and Moody v. NetChoice, decided by the Supreme Court a few weeks later, is that social media platforms have a First Amendment right to moderate the speech any user sees, and, because they exercise that right routinely, a plaintiff who believes they have been jawboned must prove that it was because of the government’s dictate, not the platform’s own decision. 

Plaintiffs Lack Standing to Bring Jawboning Claims 

Article III of the U.S. Constitution limits federal courts to only considering “cases and controversies.” This limitation requires that any plaintiff have suffered an injury that was traceable to the defendants and which the court has the power to fix. The standing doctrine can be a significant barrier to litigants without full knowledge of the facts and circumstances surrounding their injuries, and EFF has often complained that courts require plaintiffs to prove their cases on the merits at very early stages of litigation before the discovery process. Indeed, EFF’s landmark mass surveillance litigation, Jewel v. NSA, was ultimately dismissed because the plaintiffs lacked standing to sue. 

The main fault in the Murthy plaintiffs’ case was weak evidence

The standing question here differs from cases such as Jewel where courts have denied plaintiffs discovery because they couldn’t demonstrate their standing without an opportunity to gather evidence of the suspected wrongdoing. The Murthy plaintiffs had an opportunity to gather extensive evidence of suspected wrongdoing—indeed, the Supreme Court noted that the case’s factual record exceeds 26,000 pages. And the Supreme Court considered this record in its standing analysis.   

While the Supreme Court did not provide guidance on what constitutes impermissible government coercion of social media platforms in Murthy, its ruling does tell us what type of cause-and-effect a plaintiff must prove to win a jawboning case. 

A plaintiff will have to prove that the negative treatment of their speech was attributable to the government, not the independent action of the platform. This accounts for basic truths of content moderation, which we emphasized in our amicus brief: that platforms moderate all the time, often based on their community guidelines, but also often ad hoc, and informed by input from users and a variety of outside experts. 

When, as in this case, plaintiffs ask a court to stop the government from ongoing or future coercion of a platform to remove, deamplify, or otherwise obscure the plaintiffs’ speech—rather than, for example, compensate for harm caused by past coercion—those plaintiffs must show a real and immediate threat that they will be harmed again. Past incidents of government jawboning are relevant only to predict a repeat of that behavior. Further, plaintiffs seeking to stop ongoing or future government coercion must show that the platform will change its policies and practices back to their pre-coerced state should the government be ordered to stop. 

Fortunately, plaintiffs will only have to prove that a particular government actor “pressured a particular platform to censor a particular topic before that platform suppressed a particular plaintiff’s speech on that topic.” Plaintiffs do not need to show that the government targeted their posts specifically, just the general topic of their posts, and that their posts were negatively moderated as a result.  

The main fault in the Murthy plaintiffs’ case was weak evidence that the government actually caused a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ social media posts or any particular social media post at all. Indeed, the evidence that the content moderation decisions were the platforms’ independent decisions was stronger: the platforms had all moderated similar content for years and strengthened their content moderation standards before the government got involved; they spoke not just with the government but with other outside experts; and they had independent, non-governmental incentives to moderate user speech as they did. 

The Murthy plaintiffs also failed to show that the government jawboning they complained of, much of it focusing on COVID and vaccine posts, was continuing. As the Court noted, the government appears to have ceased those efforts. It was not enough that the plaintiffs continue to suffer ill effects from that past behavior. 

And lastly, the plaintiffs could not show that the order they sought from the courts preventing the government from further jawboning would actually cure their injuries, since the platforms may still exercise independent judgment to negatively moderate the plaintiffs’ posts even without governmental involvement. 

The Court Narrows the Right to Listen 

The right to listen and receive information is an important First Amendment right that has typically allowed those who are denied access to censored speech to sue to regain access. EFF has fervently supported this right. 

But the Supreme Court’s opinion in Murthy v. Missouri narrows this right. The Court explains that only those with a “concrete, specific connection to the speaker” have standing to sue to challenge such censorship. At a minimum, it appears, one who wants to sue must point to specific instances of censorship that have caused them harm; it is not enough to claim an interest in a person’s speech generally or claim harm from being denied “unfettered access to social media.” While this holding rightfully applies to the States who had sought to vindicate the audience interests of their entire populaces, it is more problematic when applied to individual plaintiffs. Going forward EFF will advocate for a narrow reading of this holding. 

As we pointed out in our amicus briefs and blog posts, this case was always a difficult one for litigating the important question of defining illegal jawboning because it was based more on a sprawling, multi-agency conspiracy theory than on specific takedown demands resulting in actual takedowns. The Supreme Court seems to have seen it the same way. 

But the Supreme Court’s Other Jawboning Case Does Help Clarify Coercion  

Fortunately, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. 

NRA v. Vullo is a lawsuit by the National Rifle Association alleging that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. Unlike Murthy, the case came to the Supreme Court on a motion to dismiss before any discovery had been conducted and when courts are required to accept all of the plaintiffs’ factual allegations as true. 

The Supreme Court importantly affirmed that the controlling case for jawboning is Bantam Books v. Sullivan, a 1963 case in which the Supreme Court established that governments violate the First Amendment by coercing one person to censor another person’s speech over which they exercise control, what the Supreme Court called “indirect censorship.”   

In Vullo, the Supreme Court endorsed a multi-factored test that many of the lower courts had adopted, as a “useful, though nonexhaustive, guide” to answering the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: (1) word choice and tone, (2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), (3) whether the speech was perceived as a threat, and (4) whether the speech refers to adverse consequences. The Supreme Court explained that the second and third factors are related—the more authority an official wields over someone, the more likely they are to perceive that official’s speech as a threat, and the less likely they are to disregard a directive from that official. And the Supreme Court made clear that coercion may arise from either threats or inducements.  

In our amicus brief in Murthy, we had urged the Court to make clear that an official’s intent to coerce was also highly relevant. The Supreme Court did not directly state this, unfortunately. But it did several times refer to the NRA as having properly alleged that the “coercive threats were aimed at punishing or suppressing disfavored speech.”  

At EFF, we will continue to look for cases that present good opportunities to bring jawboning claims before the courts and to bring additional clarity to this important doctrine. 

 

Platforms Have First Amendment Right to Curate Speech, As We’ve Long Argued, Supreme Court Says, But Sends Laws Back to Lower Courts To Decide If That Applies To Other Functions Like Messaging

Social media platforms, at least in their most common form, have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited, the U.S. Supreme Court stated in its landmark decision in Moody v. NetChoice and NetChoice v. Paxton, which were decided together. 

The cases dealt with Florida and Texas laws that each limited the ability of online services to block, deamplify, or otherwise negatively moderate certain user speech.  

Yet the Supreme Court did not strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. 

The Supreme Court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged. 

This is an important ruling and one that EFF has been arguing for in courts since 2018. We’ve already published our high-level reaction to the decision and written about how it bears on pending social media regulations. This post is a more thorough, and much longer, analysis of the opinion and its implications for future lawsuits. 

A First Amendment Right to Moderate Social Media Content 

 The most important question before the Supreme Court, and the one that will have the strongest ramifications beyond the specific laws being challenged here, is whether social media platforms have their own First Amendment rights, independent of their users’ rights, to decide what third-party content to present in their users’ feeds, recommend, amplify, deamplify, label, or block.  The lower courts in the NetChoice cases reached opposite conclusions, with the 11th Circuit considering the Florida law finding a First Amendment right to curate, and the 5th Circuit considering the Texas law refusing to do so. 

The Supreme Court appropriately resolved that conflict between the two appellate courts and answered this question yes, treating social media platforms the same as other entities that compile, edit, and curate the speech of others, such as bookstores, newsstands, art galleries, parade organizers, and newspapers.  As Justice Kagan, writing for the court’s majority, wrote, “the First Amendment offers protection when an entity engaging in expressive activity, including compiling and curating others’ speech, is directed to accommodate messages it would prefer to exclude.”   

As the Supreme Court explained,  

Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product. When the government interferes with such editorial choices—say, by ordering the excluded to be included—it alters the content of the compilation. (It creates a different opinion page or parade, bearing a different message.) And in so doing—in overriding a private party’s expressive choices—the government confronts the First Amendment. 

The court thus chose to apply the line of precedent from Miami Herald Co. v. Tornillo—in which the Supreme Court in 1974 struck down a law that required newspapers that endorsed a candidate for office to provide space to that candidate’s opponents to reply—and rejected the line of precedent from PruneYard Shopping Center v. Robins—a 1980 case in which the Supreme Court ruled that the First Amendment was not violated by a state court decision requiring a particular shopping center, under the California Constitution, to let a group set up a table and collect signatures when it allowed other groups to do so. 

In Moody, the Supreme Court explained that the latter rule applied only to situations in which the host itself was not engaged in an inherently expressive activity. That is, a social media platform deciding what user-generated content to select and recommend to its users is inherently expressive, but a shopping center deciding who gets to table on its private property is not. 

So, the Supreme Court said, the 11th Circuit got it right and the 5th Circuit did not. Indeed, the 5th Circuit got it very wrong. In the Supreme Court’s words, the 5th Circuit’s opinion “rests on a serious misunderstanding of First Amendment precedent and principle.” 

This is also the position EFF has been making in courts since at least 2018. As we wrote then, “The law is clear that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate the content. The Supreme Court has long held that private publishers have a First Amendment right to control the content of their publications. Miami Herald Co. v. Tornillo, 418 U.S. 241, 254-44 (1974).” 

This is an important rule in several contexts in addition to the state must-carry laws at issue in these cases. The same rule will apply to laws that restrict the publication and recommendation of lawful speech by social media platforms, or otherwise interfere with content moderation. And it will apply to civil lawsuits brought by those whose content has been removed, demoted, or demonetized. 

Applying this rule, the Supreme Court concluded that Texas’s law could not be constitutionally applied against Facebook’s Newsfeed and YouTube’s homepage. (The Court did not specifically address Florida’s law since it was writing in the context of identifying the 5th Circuit’s errors.)

Which Services Have This First Amendment Right? 

But the Supreme Court’s ruling doesn’t make clear which other functions of which services enjoy this First Amendment right to curate. The Supreme Court specifically analyzed only Facebook’s Newsfeed and YouTube’s homepage. It did not analyze any services offered by other platforms or other functions offered through Facebook, like messaging or event management. 

The opinion does, however, identify some factors that will be helpful in assessing which online services have the right to curate. 

  • Targeting and customizing the publication of user-generated content is protected, whether by algorithm or otherwise, pursuant to the company’s own content rules, guidelines, or standards. The Supreme Court specified that it was not assessing whether the same right would apply to personalized curation decisions made algorithmically solely based on user behavior online without any reference to a site’s own standards or guidelines. 
  • Content moderation such as labeling user posts with warnings, disclaimers, or endorsements for all users, or deletion of posts, again pursuant to a site’s own rules, guidelines, or standards, is protected. 
  • The combination of multifarious voices “to create a distinctive expressive offering” or have a “particular expressive quality” based on a set of beliefs about which voices are appropriate or inappropriate, a process that is often “the product of a wealth of choices,” is protected. 
  • There is no threshold of selectivity a service must surpass to have curatorial freedom, a point we argued in our amicus brief. “That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference,” the Supreme Court said. Courts should not focus on the ratio of rejected to accepted posts in deciding whether the right to curate exists: “It is as much an editorial choice to convey all speech except in select categories as to convey only speech within them.” 
  • Curatorial freedom exists even when no one is likely to view a platform’s editorial decisions as their endorsement of the ideas in posts they choose to publish. As the Supreme Court said, “this Court has never hinged a compiler’s First Amendment protection on the risk of misattribution.” 

Considering these factors, the First Amendment right will apply to a wide range of social media services, what the Supreme Court called “Facebook Newsfeed and its ilk” or “its near equivalents.” But its application to messaging, e-commerce, event management, and infrastructure services is less clear.

The Court, Finally, Seems to Understand Content Moderation 

Also noteworthy is that in concluding that content moderation is protected First Amendment activity, the Supreme Court showed that it finally understands how content moderation works. It accurately described the process of how social media platforms decide what any user sees in their feed. For example, it wrote:

In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. 

and 

In the face of that deluge, the major platforms cull and organize uploaded posts in a variety of ways. A user does not see everything—even everything from the people she follows—in reverse-chronological order. The platforms will have removed some content entirely; ranked or otherwise prioritized what remains; and sometimes added warnings or labels. Of particular relevance here, Facebook and YouTube make some of those decisions in conformity with content-moderation policies they call Community Standards and Community Guidelines. Those rules list the subjects or messages the platform prohibits or discourages—say, pornography, hate speech, or misinformation on select topics. The rules thus lead Facebook and YouTube to remove, disfavor, or label various posts based on their content. 

This comes only a year after Justice Kagan, who wrote this opinion, remarked of the Supreme Court during another oral argument that, “These are not, like, the nine greatest experts on the internet.” In hindsight, that statement seems more of a comment on her colleagues’ understanding than her own. 

Importantly, the Court has now moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Court used that language to describe the process in last term’s case, Twitter v. Taamneh. It is now clear that in the Taamneh case, the court was referring to Twitter’s passive relationship with ISIS, in that Twitter treated it like any other account holder, a relationship that did not support the terrorism aiding and abetting claims made in that case. 

Supreme Court Suggests Competition Law to Address Undue Market Influences 

Another important element of the Supreme Court’s analysis is its treatment of the posited rationale for both states’ speech restrictions: the need to improve or better balance the marketplace of ideas. Both laws were passed in response to perceived censorship of conservative voices, and the states sought to eliminate this perceived political bias from the platforms’ editorial practices.  

The Supreme Court found that this was not a sufficiently important reason to limit speech, as is required under First Amendment scrutiny: 

However imperfect the private marketplace of ideas, here was a worse proposal—the government itself deciding when speech was imbalanced, and then coercing speakers to provide more of some views or less of others. . . . The government may not, in supposed pursuit of better expressive balance, alter a private speaker’s own editorial choices about the mix of speech it wants to convey. 

But, as EFF has consistently urged in its amicus briefs, in these cases and others, that ruling does not leave states without any way of addressing harms caused by the market dominance of certain services.   

So, it is very heartening to see the Supreme Court point specifically to competition law as an alternative. In the Supreme Court’s words, “Of course, it is critically important to have a well-functioning sphere of expression, in which citizens have access to information from many sources. That is the whole project of the First Amendment. And the government can take varied measures, like enforcing competition laws, to protect that access.” 

While not mentioned, we think this same reasoning supports many data privacy laws as well.  

Nevertheless, the Court Did Not Strike Down Either Law

Despite this analysis, the Supreme Court did not strike down either law. Rather, it sent the cases back to the lower courts to decide whether the lawsuits were proper facial challenges to the law.  

A facial challenge is a lawsuit that argues that a law is unconstitutional in every one of its applications. Outside of the First Amendment, facial challenges are permissible only if there is no possible constitutional application of the law or, as the courts say, the law “lacks a plainly legitimate sweep.” However, in First Amendment cases, a special rule applies: a law may be struck down as overbroad if there are a substantial number of unconstitutional applications relative to the law’s permissible scope. 

To assess whether a facial challenge is proper, a court is thus required to do a three-step analysis. First, a court must identify a law’s “sweep,” that is, to whom and what actions it applies. Second, the court must then identify which of those possible applications are unconstitutional. Third, the court must then both quantitatively and qualitatively compare the constitutional and unconstitutional applications–principal applications of the law, that is, the ones that seemed to be the law’s primary targets, may be given greater weight in that balancing. The court will strike down the law only if the unconstitutional applications are substantially greater than the constitutional ones.  

The Supreme Court found that neither court conducted this analysis with respect to either the Florida or Texas law. So, it sent both cases back down so the lower courts could do so. Its First Amendment analysis set forth above was to guide the courts in determining which applications of the laws would be unconstitutional. The Supreme Court found that the Texas law cannot be constitutionally applied to Facebook’s Newsfeed or YouTube’s homepage—but the lower courts now need to complete the analysis. 

While these limitations on facial challenges have been well established for some time, the Supreme Court’s focus on them here was surprising because blatantly unconstitutional laws are challenged facially all the time.  

Here, however, the Supreme Court was reluctant to apply its First Amendment analysis beyond large social media platforms like Facebook’s Newsfeed and its close equivalents. The Court was also unsure whether and how either law would be applied to scores of other online services, such as email, direct messaging, e-commerce, payment apps, ride-hailing apps, and others. It wants the lower courts to look at those possible applications first. 

This decision thus creates a perverse incentive for states to pass laws that by their language broadly cover a wide range of activities, and in doing so make a facial challenge more difficult.

For example, the Florida law defines covered social media platforms as “any information service, system, Internet search engine, or access software provider that does business in this state and provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site,” which has either gross annual revenues of at least $100 million or at least 100 million monthly individual platform participants globally.

Texas HB20, by contrast, defines “social media platforms” as “an Internet website or application that is open to the public, allows a user to create an account, and enables users to communicate with other users for the primary purpose of posting information, comments, messages, or images,” and specifically excludes ISPs, email providers, and online services that are not primarily composed of user-generated content or for which the social aspects are incidental to the service’s primary purpose.  

Does this Make the First Amendment Analysis “Dicta”? 

Typically, language in a higher court’s opinion that is necessary to its ultimate ruling is binding on lower courts, while language that is not necessary is merely persuasive “dicta.” Here, the Supreme Court’s ruling was based on the uncertainty about the propriety of the facial challenge, and not the First Amendment issues directly. So, there is some argument that the First Amendment analysis is persuasive but not binding precedent. 

However, the Supreme Court could not responsibly remand the case back to the lower courts to consider the facial challenge question without resolving the split in the circuits, that is, the vastly different ways in which the 5th and 11th Circuits analyzed whether social media content curation is protected by the First Amendment. Without that guidance, neither court would know how to assess whether a particular potential application of the law was constitutional or not. The Supreme Court’s First Amendment analysis thus seems quite necessary and is arguably not dicta. 

 And even if the analysis is merely persuasive, six of the justices found that the editorial and curatorial freedom cases like Miami Herald Co v. Tornillo applied. At a minimum, this signals how they will rule on the issue when it reaches them again. It would be unwise for a lower court to rule otherwise, at least while those six justices remain on the Supreme Court. 

What About the Transparency Mandates?

Each law also contains several requirements that the covered services publish information about their content moderation practices. Only one type of these provisions was part of the cases before the Supreme Court, a provision from each law that required covered platforms to provide the user with notice and an explanation of certain content moderation decisions.

Heading into the Supreme Court, it was unclear what legal standard applied to these speech mandates. Was it the undue burden standard, from a case called Zauderer v. Office of Disciplinary Counsel, that applies to mandated noncontroversial and factual disclosures in advertisements and other forms of commercial speech, or the strict scrutiny standard that applies to other mandated disclosures?

The Court remanded this question with the rest of the case. But it did imply, without elaboration, that the Zauderer “undue burden” standard each of the lower courts applied was the correct one.

Tidbits From the Concurring Opinions 

All nine justices on the Supreme Court questioned the propriety of the facial challenges to the laws and favored remanding the cases back to the lower courts. So, officially the case was a unanimous 9-0 decision. But there were four separate concurring opinions that revealed some differences in reasoning, with the most significant difference being that Justices Alito, Thomas, and Gorsuch disagreed with the majority’s First Amendment analysis.

Because a majority of the Supreme Court, five justices, fully supported the First Amendment analysis discussed above, the concurrences have no legal effect. There are, however, some interesting tidbits in them that give hints as to how the justices might rule in future cases.

  • Justice Barrett fully joined the majority opinion. She wrote a separate concurrence to emphasize that the First Amendment issues may play out much differently for services other than Facebook’s Newsfeed and YouTube’s homepage. She expressed a special concern for algorithmic decision-making that does not carry out the platform’s editorial policies. She also noted that a platform’s foreign ownership might affect whether the platform has First Amendment rights, a statement that pretty much everyone assumes is directed at TikTok. 
  • Justice Jackson agreed with the majority that the Miami Herald line of cases was the correct precedent and that the 11th Circuit’s interpretation of the law was correct, whereas the 5th Circuit’s was not. But she did not agree with the majority decision to apply the law to Facebook’s Newsfeed and YouTube’s home page. Rather, the lower courts should do that. She emphasized that the law might be applied differently to different functions of a single service.
  • Justice Alito, joined by Thomas and Gorsuch, emphasized his view that the majority’s First Amendment analysis is nonbinding dicta. He criticized the majority for undertaking the analysis on the record before it. But since the majority did so, he expressed his disagreement with it. He disputed that the Miami Herald line of cases was controlling and raised the possibility that the common carrier doctrine, whereby social media would be treated more like telephone companies, was the more appropriate path. He also questioned whether algorithmic moderation reflects any human’s decision-making and whether community moderation models reflect a platform’s editorial decisions or viewpoints, as opposed to the views of its users.
  • Justice Thomas fully agreed with Justice Alito but wrote separately to make two points. First, he repeated a long-standing belief that the Zauderer “undue burden” standard, and indeed the entire commercial speech doctrine, should be abandoned. Second, he endorsed the common carrier doctrine as the correct law. He also expounded on the dangers of facial challenges. Lastly, Justice Thomas seems to have moved off, at least a little, his previous position that social media platforms were largely neutral pipes that insubstantially engaged with user speech.

How the NetChoice opinion will be viewed by lower courts and what influence it will have on state legislatures and Congress, which continue to seek to interfere with content moderation processes, remains to be seen. 

But the Supreme Court has helpfully resolved a central question and provided a First Amendment framework for analyzing the legality of government efforts to dictate what content social media platforms should or should not publish. 

 

 

 

EFF to Sixth Circuit: Government Officials Should Not Have Free Rein to Block Critics on Their Social Media Accounts When Used For Governmental Purposes

Legal intern Danya Hajjaji was the lead author of this post.

The Sixth Circuit must carefully apply a new “state action” test from the U.S. Supreme Court to ensure that public officials who use social media to speak for the government do not have free rein to infringe critics’ First Amendment rights, EFF and the Knight First Amendment Institute at Columbia University said in an amicus brief.

The Sixth Circuit is set to re-decide Lindke v. Freed, a case that was recently remanded from the Supreme Court. The lawsuit arose after Port Huron, Michigan resident Kevin Lindke left critical comments on City Manager James Freed's Facebook page. Freed retaliated by blocking Lindke from being able to view, much less continue to leave critical comments on, Freed’s public profile. The dispute turned on the nature of Freed’s Facebook account, where updates on his government engagements were interwoven with personal posts.

Public officials who use social media as an extension of their office engage in “state action,” which refers to acting on the government’s behalf. They are bound by the First Amendment and generally cannot engage in censorship, especially viewpoint discrimination, by deleting comments or blocking citizens who criticize them. While social media platforms are private corporate entities, government officials who operate interactive online forums to engage in public discussions and share information are bound by the First Amendment.

The Sixth Circuit initially ruled in Freed’s favor, holding that no state action exists due to the prevalence of personal posts on his Facebook page and the lack of government resources, such as staff members or taxpayer dollars, used to operate it.  

The case then went to the U.S. Supreme Court, where EFF and the Knight Institute filed a brief urging the Court to establish a functional test that finds state action when a government official uses a social media account in furtherance of their public duties, even if the account is also sometimes used for personal purposes.

The U.S. Supreme Court crafted a new two-pronged state action test: a government official’s social media activity is state action if 1) the official “possessed actual authority to speak” on the government’s behalf and 2) “purported to exercise that authority” when speaking on social media. As we wrote when the decision came out, this state action test does not go far enough in protecting internet users who interact with public officials online. Nevertheless, the Court has finally provided further guidance on this issue as a result.

Now that the case is back in the Sixth Circuit, EFF and the Knight Institute filed a second brief endorsing a broad construction of the Supreme Court’s state action test.

The brief argues that the test’s “authority” prong requires no more than a showing, either through written law or unwritten custom, that the official had the authority to speak on behalf of the government generally, irrespective of the medium of communication—whether an in-person press conference or social media. It need not be the authority to post on social media in particular.

For high-ranking elected officials (such as presidents, governors, mayors, and legislators), courts should not have a problem finding that they have clear and broad authority to speak on government policies and activities. The same is true for heads of government agencies, who are also generally empowered to speak on matters broadly relevant to those agencies. For lower-ranking officials, courts should consider the areas of their expertise and whether their social media posts in question were related to subjects within, as the Supreme Court said, their “bailiwick.”

The brief also argues that the test’s “exercise” prong requires courts to engage in, in the words of the Supreme Court, a “fact-specific undertaking” to determine whether the official was speaking on social media in furtherance of their government duties.

This element is easily met where the social media account is owned, created, or operated by the office or agency itself, rather than the official—for example, the Federal Trade Commission’s @FTC account on X (formerly Twitter).

But when an account is owned by the person and is sometimes used for non-governmental purposes, courts must look to the content of the posts. These include those posts from which the plaintiff’s comments were deleted, or any posts the plaintiff would have wished to see or comment on had the official not blocked them entirely. Former President Donald Trump is a salient example, having routinely used his legacy @realDonaldTrump X account, rather than the government-created and operated account @POTUS, to speak in furtherance of his official duties while president.

However, it is often not easy to differentiate between personal and official speech by looking solely at the posts themselves. For example, a social media post could be either private speech reflecting personal political passions, or it could be speech in furtherance of an official’s duties, or both. If this is the case, courts must consider additional factors when assessing posts made to a mixed-use account. These factors can be an account’s appearance, such as whether government logos were used; whether government resources such as staff or taxpayer funds were used to operate the social media account; and the presence of any clear disclaimers as to the purpose of the account.

EFF and the Knight Institute also encouraged the Sixth Circuit to consider the crucial role social media plays in facilitating public participation in the political process and accountability of government officials and institutions. If the Supreme Court’s test is construed too narrowly, public officials will further circumvent their constitutional obligations by blocking critics or removing any trace of disagreement from any social media accounts that are used to support and perform their official duties.

Social media has given rise to active democratic engagement, while government officials at every level have leveraged this to reach their communities, discuss policy issues, and make important government announcements. Excessively restricting any member of the public’s viewpoints threatens public discourse in spaces government officials have themselves opened as public political forums.

Victory! Supreme Court Rules Platforms Have First Amendment Right to Decide What Speech to Carry, Free of State Mandates

The Supreme Court today correctly found that social media platforms, like newspapers, bookstores, and art galleries before them, have First Amendment rights to curate and edit the speech of others they deliver to their users, and the government has a very limited role in dictating what social media platforms must and must not publish. Although users remain understandably frustrated with how the large platforms moderate user speech, the best deal for users is when platforms make these decisions instead of the government.  

As we explained in our amicus brief, users are far better off when publishers make editorial decisions free from government mandates. Although the court did not reach a final determination about the Texas and Florida laws, it confirmed that their core provisions are inconsistent with the First Amendment when they force social media sites to publish user posts that are, at best, irrelevant, and, at worst, false, abusive, or harassing. The government’s favored speakers would be granted special access to the platforms, and the government’s disfavored speakers silenced. 

We filed our first brief advocating this position in 2018 and are pleased to see that the Supreme Court has finally agreed. 

Notably, the court emphasizes another point EFF has consistently made: that the First Amendment right to edit and curate user content does not immunize social media platforms and tech companies more broadly from other forms of regulation not related to editorial policy. As the court wrote: “Many possible interests relating to social media can meet that test; nothing said here puts regulation of NetChoice’s members off-limits as to a whole array of subjects.” The court specifically calls out competition law as one avenue to address problems related to market dominance and lack of user choice. Although not mentioned in the court’s opinion, consumer privacy laws are another available regulatory tool.  

We will continue to urge platforms large and small to adopt the Santa Clara Principles as a human rights framework for content moderation. Further, we will continue to advocate for strong consumer data privacy laws to regulate social media companies’ invasive practices, as well as more robust competition laws that could end the major platforms’ dominance.   

EFF has been urging courts to adopt this position for almost six years. We filed our first amicus brief in November 2018: https://www.eff.org/document/prager-university-v-google-eff-amicus-brief  

EFF’s must-carry laws issue page: https://www.eff.org/cases/netchoice-must-carry-litigation 

Press release for our SCOTUS amicus brief: https://www.eff.org/press/releases/landmark-battle-over-free-speech-eff-urges-supreme-court-strike-down-texas-and 

Direct link to our brief: https://www.eff.org/document/eff-brief-moodyvnetchoice

EFF Statement on Assange Plea Deal

The United States has now, for the first time in the more than 100-year history of the Espionage Act, obtained an Espionage Act conviction for basic journalistic acts. Here, the criminal information against Assange charges him with obtaining newsworthy information from a source, communicating it to the public, and expressing an openness to receiving more highly newsworthy information. This sets a dangerous practical precedent, and all those who value a free press should work to make sure that it never happens again. While we are pleased that Assange can now be freed for time served and return to Australia, these charges should never have been brought.


Win for Free Speech! Australia Drops Global Takedown Order Case

As we put it in a blog post last month, no single country should be able to restrict speech across the entire internet. That's why EFF celebrates the news that Australia's eSafety Commissioner is dropping its legal effort to have content on X, the website formerly known as Twitter, taken down across the globe. This development comes just days after EFF and FIRE were granted official intervener status in the case. 

In April, the Commissioner ordered X to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post in Australia, but it declined to block it elsewhere. The Commissioner then asked an Australian court to order a global takedown — securing a temporary order that was not extended. EFF moved to intervene on behalf of X, and legal action was ongoing until this week, when the Commissioner announced she would discontinue Federal Court proceedings. 

We are pleased that the Commissioner saw the error in her efforts and dropped the action. Global takedown orders threaten freedom of expression, create conflicting legal obligations, and reduce available internet content to the lowest common denominator, allowing the least tolerant legal system to determine what we all are able to read and distribute online.

As part of our continued fight against global censorship, EFF opposes efforts by individual countries to write the rules for free speech for the entire world. Unfortunately, all too many governments, even democracies, continue to lose sight of how global takedown orders threaten free expression for us all. 

U.S. Supreme Court Does Not Go Far Enough in Determining When Government Officials Are Barred from Censoring Critics on Social Media

After several years of litigation across the federal appellate courts, the U.S. Supreme Court in a unanimous opinion has finally crafted a test that lower courts can use to determine whether a government official engaged in “state action” such that censoring individuals on the official’s social media page—even if also used for personal purposes—would violate the First Amendment.

The case, Lindke v. Freed, came out of the Sixth Circuit and involves a city manager, while a companion case called O'Connor-Ratcliff v. Garnier came out of the Ninth Circuit and involves public school board members.

A Two-Part Test

The First Amendment prohibits the government from censoring individuals’ speech in public forums based on the viewpoints that individuals express. In the age of social media, where people in government positions use public-facing social media for personal, campaign, and official government purposes alike, it can be unclear whether the interactive parts (e.g., comments section) of a social media page operated by someone who works in government amount to a government-controlled public forum subject to the First Amendment’s prohibition on viewpoint discrimination. Another way of stating the issue is whether a government official who uses a social media account for personal purposes is engaging in state action when they also use the account to speak about government business.

As the Supreme Court states in the Lindke opinion, “Sometimes … the line between private conduct and state action is difficult to draw,” and the question is especially difficult “in a case involving a state or local official who routinely interacts with the public.”

The Supreme Court announced a fact-intensive test to determine if a government official’s speech on social media counts as state action under the First Amendment. The test includes two required elements:

  • the official “possessed actual authority to speak” on the government’s behalf, and
  • the official “purported to exercise that authority when he spoke on social media.”

Although the court’s opinion isn’t as generous to internet users as we had asked for in our amicus brief, it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

This issue has been percolating in the courts since at least 2016. Perhaps most famously, the Knight First Amendment Institute at Columbia University and others sued then-president Donald Trump for blocking many of the plaintiffs on Twitter. In that case, the U.S. Court of Appeals for the Second Circuit affirmed a district court’s holding that President Trump’s practice of blocking critics from his Twitter account violated the First Amendment. EFF has also represented PETA in two cases against Texas A&M University.

Element One: Does the official possess actual authority to speak on the government’s behalf?

There is some ambiguity as to what specific authority the Supreme Court believes the government official must have. The opinion is unclear whether the authority is simply the general authority to speak officially on behalf of the public entity, or instead the specific authority to speak officially on social media. On the latter framing, the opinion, for example, discusses the authority “to post city updates and register citizen concerns,” and the authority “to speak for the [government]” that includes “the authority to do so on social media….” The broader authority to generally speak on behalf of the government would be easier to prove for plaintiffs and should always include any authority to speak on social media.

Element One Should Be Interpreted Broadly

We will urge the lower courts to interpret the first element broadly. As we emphasized in our amicus brief, social media is so widely used by government agencies and officials at all levels that a government official’s authority generally to speak on behalf of the public entity they work for must include the right to use social media to do so. Any other result does not reflect the reality we live in.

Moreover, plaintiffs who are being censored on social media are not typically commenting on the social media pages of low-level government employees, say, the clerk at the county tax assessor’s office, whose authority to speak publicly on behalf of their agency may be questionable. Plaintiffs are instead commenting on the social media pages of people in leadership positions, who are often agency heads or in elected positions and who surely should have the general authority to speak for the government.

“At the same time,” the Supreme Court cautions, “courts must not rely on ‘excessively broad job descriptions’ to conclude that a government employee is authorized to speak” on behalf of the government. But under what circumstances would a court conclude that a government official in a leadership position does not have such authority? We hope these circumstances are few and far between for the sake of plaintiffs seeking to vindicate their First Amendment rights.

When Does the Use of a New Communications Technology Become So “Well Settled” That It May Fairly Be Considered Part of a Government Official’s Public Duties?

If, on the other hand, the lower courts interpret the first element narrowly and require plaintiffs to provide evidence that the government official who censored them had authority to speak on behalf of the agency on social media specifically, this will be more difficult to prove.

One helpful aspect of the court’s opinion is that the government official’s authority to speak (however that’s defined) need not be written explicitly in their job description. This contrasts with what the Sixth Circuit had essentially held. The authority to speak on behalf of the government, instead, may be based on “persistent,” “permanent,” and “well settled” “custom or usage.”

We remain concerned, however, that if there is a narrower requirement that the authority must be to speak on behalf of the government via a particular communications technology—in this case, social media—then at what point does the use of a new technology become so “well settled” for government officials that it is fair to conclude that it is within their public duties?

Fortunately, the case law on which the Supreme Court relies does not require an extended period of time for a government practice to be deemed a legally sufficient “custom or usage.” It would not make sense to require an ages-old custom and usage of social media when the widespread use of social media within the general populace is only a decade and a half old. Ultimately, we will urge lower courts to avoid this problem and broadly interpret element one.

Government Officials May Be Free to Censor If They Speak About Government Business Outside Their Immediate Purview

Another problematic aspect of the Supreme Court’s opinion within element one is the additional requirement that “[t]he alleged censorship must be connected to speech on a matter within [the government official’s] bailiwick.”

The court explains:

For example, imagine that [the city manager] posted a list of local restaurants with health-code violations and deleted snarky comments made by other users. If public health is not within the portfolio of the city manager, then neither the post nor the deletions would be traceable to [his] state authority—because he had none.

But the average constituent may not make such a distinction—nor should they. They would simply see a government official talking about an issue generally within the government’s area of responsibility. Yet under this interpretation, the city manager would be within his rights to delete the comments, as the constituent could not prove that the issue was within that particular government official’s purview, and they would thus fail to meet element one.

Element Two: Did the official purport to exercise government authority when speaking on social media?

Plaintiffs Are Limited in How a Social Media Account’s “Appearance and Function” Inform the State Action Analysis

In our brief, we argued for a functional test, where state action would be found if a government official were using their social media account in furtherance of their public duties, even if they also used that account for personal purposes. This was essentially the standard that the Ninth Circuit adopted, which included looking at, in the words of the Supreme Court, “whether the account’s appearance and content look official.” The Supreme Court’s two-element test is more cumbersome for plaintiffs. But the upside is that the court agrees that a social media account’s “appearance and function” is relevant, even if only with respect to element two.

Reality of Government Officials Using Both Personal and Official Accounts in Furtherance of Their Public Duties Is Ignored

Another problematic aspect of the Supreme Court’s discussion of element two is that a government official’s social media page would amount to state action if the page is the “only” place where content related to government business is located. The court provides an example: “a mayor would engage in state action if he hosted a city council meeting online by streaming it only on his personal Facebook page” and it wasn’t also available on the city’s official website. The court further discusses a new city ordinance that “is not available elsewhere,” except on the official’s personal social media page. By contrast, if “the mayor merely repeats or shares otherwise available information … it is far less likely that he is purporting to exercise the power of his office.”

This limitation is divorced from reality and will hamstring plaintiffs seeking to vindicate their First Amendment rights. As we showed extensively in our brief (see Section I.B.), government officials regularly use both official office accounts and “personal” accounts for the same official purposes, by posting the same content and soliciting constituent feedback—and constituents often do not understand the difference.

Constituent confusion is particularly salient when government officials continue to use “personal” campaign accounts after they enter office. The court’s conclusion that a government official “might post job-related information for any number of personal reasons, from a desire to raise public awareness to promoting his prospects for reelection” is thus highly problematic. The court is correct that government officials have their own First Amendment right to speak as private citizens online. However, their constituents should not be subject to censorship when a campaign account functions the same as a clearly official government account.

An Upside: Supreme Court Denounces the Blocking of Users Even on Mixed-Use Social Media Accounts

One very good aspect of the Supreme Court’s opinion is that if the censorship amounted to the blocking of a plaintiff from engaging with the government official’s social media page as a whole, then the plaintiff must merely show that the government official “had engaged in state action with respect to any post on which [the plaintiff] wished to comment.”  

The court further explains:

The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability.

We are pleased with this language and hope it discourages government officials from engaging in the most egregious of censorship practices.

The Supreme Court also makes the point that if the censorship was the deletion of a plaintiff’s individual comments under a government official’s posts, then those posts must each be analyzed under the court’s new test to determine whether a particular post was official action and whether the interactive spaces that accompany it are government forums. As the court states, “it is crucial for the plaintiff to show that the official is purporting to exercise state authority in specific posts.” This is in contrast to the Sixth Circuit, which held, “When analyzing social-media activity, we look to a page or account as a whole, not each individual post.”

The Supreme Court’s new test for state action unfortunately puts a thumb on the scale in favor of government officials who wish to censor constituents who engage with them on social media. However, the test does chart a path forward on this issue and should be workable if lower courts apply the test with an eye toward maximizing constituents’ First Amendment rights online.
