
EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes

Update: After this blog post was published (addressing Meta's blog post here), we learned Meta also revised its public "Hateful Conduct" policy in ways EFF finds concerning. We address these changes in this blog post, published January 9, 2025.

In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political topics. 

Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing misinformation and can potentially give users greater control. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has proved especially true in international contexts, where such organizations have been instrumental in refuting, for example, genocide denial.

So, even if Meta is changing how it uses and prioritizes fact-checking entities, we hope that Meta will continue to look to fact-checking entities as an available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.

Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look closely at its content moderation practices with regard to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.

Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more political than practical. There is, of course, no population that is inherently free from bias; by moving to Texas, the “concern” will likely not be reduced, but merely relocated from perceived “California bias” to perceived “Texas bias.”

Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta’s previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn’t illegal in the United States. We applaud Meta’s efforts to try to fix its over-censorship problem, but we will watch closely to make sure the change is a good-faith effort, rolled out fairly, and not merely a political maneuver to accommodate the upcoming U.S. administration change.

Unveiling Repression in Venezuela: A Legacy of Surveillance and State Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, highlighting it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines infrastructural control (keeping essential services barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, the civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

In These Five Social Media Speech Cases, Supreme Court Set Foundational Rules for the Future

The U.S. Supreme Court addressed government’s various roles with respect to speech on social media in five cases reviewed in its recently completed term. The through-line of these cases is a critically important principle that sets limits on government’s ability to control the online speech of people who use social media, as well as the social media sites themselves: internet users’ First Amendment rights to speak on social media—whether by posting or commenting—may be infringed by the government if it interferes with content moderation, but will not be infringed by the independent decisions of the platforms themselves.

As a general overview, the NetChoice cases, Moody v. NetChoice and NetChoice v. Paxton, looked at government’s role as a regulator of social media platforms. The issue was whether state laws in Texas and Florida that prevented certain online services from moderating content were constitutional in most of their possible applications. The Supreme Court did not rule on that question and instead sent the cases back to the lower courts to reexamine NetChoice’s claim that the statutes had few possible constitutional applications.

The court did, importantly and correctly, explain that at least Facebook’s Newsfeed and YouTube’s Homepage were examples of platforms exercising their own First Amendment rights on how to display and organize content, and the laws could not be constitutionally applied to Newsfeed and Homepage and similar sites, a preliminary step in determining whether the laws were facially unconstitutional.

Lindke v. Freed and Garnier v. O’Connor-Ratcliffe looked at the government’s role as a social media user who has an account and wants to use its full features, including blocking other users and deleting comments. The Supreme Court instructed the lower courts to first look to whether a government official has the authority to speak on behalf of the government, before looking at whether the official used their social media page for governmental purposes, conduct that would trigger First Amendment protections for the commenters.

Murthy v. Missouri, the jawboning case, looked at the government’s mixed role as a regulator and user, in which the government may be seeking to coerce platforms to engage in unconstitutional censorship or may also be a user simply flagging objectionable posts as any user might. The Supreme Court found that none of the plaintiffs had standing to bring the claims because they could not show that their harms were traceable to any action by the federal government defendants.

We’ve analyzed each of the Supreme Court decisions, Moody v. NetChoice (decided with NetChoice v. Paxton), Murthy v. Missouri, and Lindke v. Freed (decided with Garnier v. O’Connor-Ratcliffe), in depth.

But some common themes emerge when all five cases are considered together.

  • Internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves. This principle, which EFF has been advocating for many years, is evident in each of the rulings. In Lindke, the Supreme Court recognized that government officials, if vested with and exercising official authority, could violate the First Amendment by deleting a user’s comments or blocking them from commenting altogether. In Murthy, the Supreme Court found that users could not sue the government for violating their First Amendment rights unless they could show that government coercion led to their content being taken down or obscured, rather than the social media platform’s own editorial decision. And in the NetChoice cases, the Supreme Court explained that social media platforms typically exercise their own protected First Amendment rights when they edit and curate which posts they show to their users, and the government may violate the First Amendment when it requires them to publish or amplify posts.

  • Underlying these rulings is the Supreme Court’s long-awaited recognition that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see them, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. This is seen in the Supreme Court’s emphasis in Murthy that jawboning is not actionable if the content moderation was the independent decision of the platform rather than coerced by the government. And a similar recognition of independent decision-making underlies the Supreme Court’s First Amendment analysis in the NetChoice cases. The Supreme Court has now thankfully moved beyond the idea that content moderation is largely passive and indifferent, a concern that had been raised after the Supreme Court used that language to describe the process in last term’s case, Twitter v. Taamneh.

  • This term’s cases also confirm that traditional First Amendment rules apply to social media. In Lindke, the Supreme Court recognized that when government controls the comments components of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. In the NetChoice cases, the Supreme Court found that platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Plenty of legal issues around social media remain to be decided. But the 2023-24 Supreme Court term has set out important speech-protective rules that will serve as the foundation for many future rulings. 

 

The Global Suppression of Online LGBTQ+ Speech Continues

A global increase in anti-LGBTQ+ intolerance is having a significant impact on digital rights. As we wrote last year, censorship of LGBTQ+ websites and online content is on the rise. For many LGBTQ+ individuals the world over, the internet can be a safer space for exploring identity, finding community, and seeking support. But from anti-LGBTQ+ bills restricting free expression and privacy to content moderation decisions that disproportionately impact LGBTQ+ users, digital spaces that used to seem like safe havens are, for many, no longer so.

EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world, and that includes LGBTQ+ communities, which all too often face threats, censorship, and other risks when they go online. This Pride month—and the rest of the year—we’re highlighting some of those risks, and what we’re doing to help change online spaces for the better.

Worsening threats in the Americas

In the United States, where EFF is headquartered, recent gains in rights have been followed by an uptick in intolerance that has led to legislative efforts, mostly at the state level. In 2024 alone, 523 anti-LGBTQ+ bills have been proposed by state legislatures, many of which restrict freedom of expression. In addition to these bills, a drive in mostly conservative areas to ban books in school libraries—many of which contain LGBTQ+ themes—is creating an environment in which queer youth feel even more marginalized.

At the national level, an effort to protect children from online harms—the Kids Online Safety Act (KOSA)—risks alienating young people, particularly those from marginalized communities, by restricting their access to certain content on social media. EFF spoke with young people about KOSA, and found that many are concerned that they will lose access to help, education, friendship, and a sense of belonging that they have found online. At a time when many young people have just come out of several years of isolation during the pandemic and reliance on online communities for support, restricting their access could have devastating consequences.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Similarly, age-verification bills being put forth by state legislatures often seek to prevent access to material deemed harmful to minors. If passed, these measures would restrict access to vital content, including education and resources that LGBTQ+ youth without local support often rely upon. These bills often contain vague and subjective definitions of “harm” and are all too often another strategy in the broader attack on free expression that includes book bans, censorship of reproductive health information, and attacks on LGBTQ+ youth.

Moving south of the border, in much of South and Central America, legal progress has been made with respect to rights, but violence against LGBTQ+ people is particularly high, and that violence often has online elements. In the Caribbean, where a number of countries have strict anti-LGBTQ+ laws on the books, often stemming from the colonial era, online spaces can be risky, and those who express their identities in them often face bullying and doxxing, which can lead to physical harm.

In many other places throughout the world, the situation is even worse. While LGBTQ+ rights have progressed considerably over the past decade in a number of democracies, the sense of freedom and ease that these hard-won gains created for many is suffering serious setbacks. And in more authoritarian countries where the internet may have once been a lifeline, crackdowns on expression have coincided with increases in user growth and often explicitly target LGBTQ+ speech.

In Europe, anti-LGBTQ+ violence at a record high

In recent years, legislative efforts aimed at curtailing LGBTQ+ rights have gained momentum in several European countries, largely the result of a rise in right-wing populism and conservatism. In Hungary, for instance, the Orban government has enacted laws that restrict LGBTQ+ rights under the guise of protecting children. In 2021, the country passed a law banning the portrayal or promotion of LGBTQ+ content to minors. In response, the European Commission launched legal cases against Hungary—as well as some regions in Poland—over LGBTQ+ discrimination, with Commission President Ursula von der Leyen labeling the law as "a shame" and asserting that it clearly discriminates against people based on their sexual orientation, contravening the EU's core values of equality and human dignity​.

In Russia, the government has implemented severe restrictions on LGBTQ+ content online. A law initially passed in 2013 banning the promotion of “non-traditional sexual relations” among minors was expanded in 2022 to apply to individuals of all ages, further criminalizing LGBTQ+ content. The law prohibits the mention or display of LGBTQ+ relationships in advertising, books, media, films, and on online platforms, and has created a hostile online environment. Media outlets that break the law can be fined or shut down by the government, while foreigners who break the law can be expelled from the country. 

Among the first victims of the amended law were seven migrant sex workers—all trans women—from Central Asia who were fined and deported in 2023 after they published their profiles on a dating website. Also in 2023, six online streaming platforms were penalized for airing movies with LGBTQ-related scenes. The films included “Bridget Jones: The Edge of Reason”, “Green Book”, and the Italian film “Perfect Strangers.”

Across the continent, with anti-LGBTQ+ violence at a record high, queer communities are often the target of online threats. A 2022 report by the European Digital Media Observatory documented a significant increase in online disinformation campaigns targeting LGBTQ+ communities, which often frame them as threats to traditional family values.

Across Africa, LGBTQ+ rights under threat

In 30 of the 54 countries on the African continent, homosexuality is prohibited. Nevertheless, there is a growing movement to decriminalize LGBTQ+ identities and push toward achieving greater rights and equality. As in many places, the internet often serves as a safer space for community and organizing, and has therefore become a target for governments seeking to crack down on LGBTQ+ people.

In Tanzania, for instance, where consensual same-sex acts are prohibited under the country’s colonial-era Penal Code, authorities have increased digital censorship against LGBTQ+ content, blocking websites and social media platforms that provide support and information to the LGBTQ+ community. This crackdown is making it increasingly difficult for people to find safe spaces online. As a result of these restrictions, many online groups used by the LGBTQ+ community for networking and support have been forced to disband, driving individuals to riskier public spaces to meet and socialize.

In other countries across the continent, officials are weaponizing legal systems to crack down on LGBTQ+ people and their expression. According to Access Now, a proposed law in Kenya, the Family Protection Bill, seeks to ban a variety of actions, including public displays of affection, engagement in activities that seek to change public opinion on LGBTQ+ issues, and the use of the internet, media, social media platforms, and electronic devices to “promote homosexuality.” Furthermore, the prohibited acts would fall under the country’s Computer Misuse and Cybercrimes Act of 2018, giving law enforcement the power to monitor and intercept private communications during investigations, as provided by Section 36 of the National Intelligence Service Act, 2012. 

A draconian law passed in Uganda in 2023, the Anti-Homosexuality Act, introduced capital punishment for certain acts, while allowing for life imprisonment for others. The law further imposes a 20-year prison sentence for people convicted of “promoting homosexuality,” which includes the publication of LGBTQ+ content, as well as “the use of electronic devices such as the internet, mobile phones or films for the purpose of homosexuality or promoting homosexuality.”

In Ghana, if passed, the anti-LGBTQ+ Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill would introduce prison sentences for those who engage in LGBTQ+ sexual acts as well as those who promote LGBTQ+ rights. As we’ve previously written, the bill would ban all speech and activity, on- and offline, that even remotely supports LGBTQ+ rights. Though the bill passed through parliament in March, Ghana’s president has said he won’t sign it until the country’s Supreme Court rules on its constitutionality.

And in Egypt and Tunisia, authorities have integrated technology into their policing of LGBTQ+ people, according to a 2023 Human Rights Watch report. In Tunisia, where homosexuality is punishable by up to three years in prison, online harassment and doxxing are common, threatening the safety of LGBTQ+ individuals. Human Rights Watch has documented cases in which social media users, including alleged police officers, have publicly harassed activists, resulting in offline harm.

Egyptian security forces often monitor online LGBTQ+ activity and have used social media platforms as well as Grindr to target and arrest individuals. Although same-sex relations are not explicitly banned by law in the country, authorities use various morality provisions to effectively criminalize homosexual relations. More recently, prosecutors have utilized cybercrime and online morality laws to pursue harsher sentences.

In Asia, cybercrime laws threaten expression

LGBTQ+ rights in Asia vary widely. While homosexual relations are legal in a majority of countries, they are strictly banned in twenty, and same-sex marriage is only legal in three—Taiwan, Nepal, and Thailand. Online threats are also varied, ranging from harassment and self-censorship to the censoring of LGBTQ+ content—such as in Indonesia, Iran, China, Saudi Arabia, the UAE, and Malaysia, among other nations—as well as legal restrictions with often harsh penalties.

The use of cybercrime provisions to target LGBTQ+ expression is on the rise in a number of countries, particularly in the MENA region. In Jordan, the Cybercrime Law of 2023, passed last August, imposes restrictions on freedom of expression, particularly for LGBTQ+ individuals. Articles 13 and 14 of the law impose penalties for producing, distributing, or consuming “pornographic activities or works” and for using information networks to “facilitate, promote, incite, assist, or exhort prostitution and debauchery, or seduce another person, or expose public morals.” Jordan follows in the footsteps of neighboring Egypt, which instituted a similar law in 2018.

The LGBTQ+ movement in Bangladesh is impacted by the Cyber Security Act, quietly passed in 2023. Several provisions of the Act can be used to target LGBTQ+ sites; Section 8 enables the government to shut down websites, while Section 42 grants law enforcement agencies the power to search and seize a person’s hardware, social media accounts, and documents, both online and offline, without a warrant. And Section 25 criminalizes published content that tarnishes the image or reputation of the country.

The online struggle is global

In addition to national-level restrictions, LGBTQ+ individuals often face content suppression on social media platforms. While some of this occurs as the result of government requests, much of it is actually due to platforms’ own policies and practices. A recent GLAAD case study points to specific instances where content promoting or discussing LGBTQ+ issues is disproportionately flagged and removed, compared to non-LGBTQ+ content. The GLAAD Social Media Safety Index also provides numerous examples where platforms inconsistently enforce their policies. For instance, posts that feature LGBTQ+ couples or transgender individuals are sometimes taken down for alleged policy violations, while similar content featuring heterosexual or cisgender individuals remains untouched. This inconsistency suggests a bias in content moderation, one that EFF has previously documented, and leads to the erasure of LGBTQ+ voices in online spaces.

Likewise, the community now faces threats at the global level, in the form of the impending UN Cybercrime Convention, currently in negotiations. As we’ve written, the Convention would expand cross-border surveillance powers, enabling nations to potentially exploit these powers to probe acts they controversially label as crimes based on subjective moral judgements rather than universal standards. This could jeopardize vulnerable groups, including the LGBTQ+ community.

EFF is pushing back to ensure that the Cybercrime Treaty’s scope remains narrow and that human rights safeguards are a priority. You can read our written and oral interventions and follow our Deeplinks Blog for updates. Earlier this year, along with Access Now, we also submitted comments to the U.N. Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to inform the Independent Expert’s thematic report presented to the U.N. Human Rights Council at its fifty-sixth session.

But just as the struggle for LGBTQ+ rights and recognition is global, so too is the struggle for a safer and freer internet. EFF works year round to highlight that struggle and to ensure LGBTQ+ rights are protected online. We collaborate with allies around the world, and work to ensure that both states and companies protect and respect the rights of LGBTQ+ communities worldwide.

We also want to help LGBTQ+ communities stay safer online. As part of our Surveillance Self-Defense project, we offer a number of guides for safer online communications, including a guide specifically for LGBTQ+ youth.

EFF believes in preserving an internet that is free for everyone. While there are numerous harms online as in the offline world, digital spaces are often a lifeline for queer youth, particularly those living in repressive environments. The freedom of discovery, the sense of community, and the access to information that the internet has provided for so many over the years must be preserved. 



Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction

EFF welcomes the latest and long-awaited policy advisory opinion from Meta’s Oversight Board, which calls on the company to end its blanket ban on the use of the Arabic-language term “shaheed” when referring to individuals listed under Meta’s policy on dangerous organizations and individuals. We call on Meta to fully implement the Board’s recommendations.

Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties.

The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as “shaheed,” an Arabic term that closely (though not exactly) translates to “martyr,” when those same individuals have previously been designated by Meta as “dangerous” under its policy on dangerous organizations and individuals—a policy that covers both government-proscribed organizations and others selected by the company. The Board found that Meta’s approach to moderating content that uses the term to refer to such designated individuals substantially and disproportionately restricts free expression.

The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote:

The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination.
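
To make that concern concrete, here is a minimal sketch, in Python, of how blanket keyword removal behaves. It is a hypothetical illustration only: the term list, sample posts, and function are invented, and Meta’s actual moderation pipeline is not public.

```python
# Hypothetical sketch of blanket keyword removal. Any post containing
# a banned term is removed, regardless of whether it reports news,
# mourns a relative, or glorifies violence. All names and examples
# here are invented for illustration.

BANNED_TERMS = {"shaheed"}

def blanket_filter(post: str) -> bool:
    """Return True (remove the post) if it contains any banned term."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BANNED_TERMS)

posts = [
    "News report: the shaheed was buried on Friday.",      # journalism
    "My grandfather, a shaheed of the 1948 war.",          # remembrance
    "Praise for the shaheed who carried out the attack.",  # glorification
]

for post in posts:
    print(blanket_filter(post), post)
# All three print True: the filter cannot distinguish reporting and
# remembrance from glorification, which is the disproportionate
# over-removal the Board identified.
```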

In addition to finding that Meta’s current approach to moderating such content restricts free expression, the Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

We couldn’t agree more. We have long been concerned about the impact of corporate policies and government regulations designed to limit violent extremist content on human rights and evidentiary content, as well as journalism and art. We have worked directly with companies and with multi-stakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking.

In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF.

We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

Protect Yourself from Election Misinformation

Welcome to your U.S. presidential election year, when all kinds of bad actors will flood the internet with election-related disinformation and misinformation aimed at swaying or suppressing your vote in November. 

So… what’re you going to do about it? 

As EFF’s Corynne McSherry wrote in 2020, online election disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been linked to ethnic violence in Myanmar and India and to Kenya’s 2017 elections, among other events. Still, election misinformation and disinformation continue to proliferate online and off.

That being said, regulation is not typically an effective or human rights-respecting way to address election misinformation. Even well-meaning efforts to control election misinformation through regulation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression. Indeed, any content regulation must be scrutinized to avoid inadvertently affecting meaningful expression: Is the approach narrowly tailored or a categorical ban? Does it empower users? Is it transparent? Is it consistent with human rights principles? 

While platforms and regulators struggle to get it right, internet users must be vigilant about checking the election information they receive for accuracy. There is help. Nonprofit journalism organization ProPublica published a handy guide about how to tell if what you’re reading is accurate or “fake news.” The International Federation of Library Associations and Institutions’ infographic on How to Spot Fake News is a quick and easy-to-read reference you can share with friends.

To make sure you’re getting good information about how your election is being conducted, check in with trusted sources including your state’s Secretary of State, Common Cause, and other nonpartisan voter protection groups, or call or text 866-OUR-VOTE (866-687-8683) to speak with a trained election protection volunteer. 

And if you see something, say something: You can report election disinformation at https://reportdisinfo.org/, a project of the Common Cause Education Fund. 

EFF also offers some election-year food for thought:

  • On EFF’s “How to Fix the Internet” podcast, Pamela Smith—president and CEO of Verified Voting—in 2022 talked with EFF’s Cindy Cohn and Jason Kelley about finding reliable information on how your elections are conducted, as part of ensuring ballot accessibility and election transparency.
  • Also on “How to Fix the Internet”, Alice Marwick—cofounder and principal researcher at the University of North Carolina, Chapel Hill’s Center for Information, Technology and Public Life—in 2023 talked about finding ways to identify and leverage people’s commonalities to stem the flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out. She discussed why seemingly ludicrous conspiracy theories get so many views and followers; how disinformation is tied to personal identity and feelings of marginalization and disenfranchisement; and when fact-checking does and doesn’t work.
  • EFF’s Cory Doctorow wrote in 2020 about how big tech monopolies distort our public discourse: “By gathering a lot of data about us, and by applying self-modifying machine-learning algorithms to that data, Big Tech can target us with messages that slip past our critical faculties, changing our minds not with reason, but with a kind of technological mesmerism.” 

An effective democracy requires an informed public, and participating in a democracy is a responsibility that requires work. Online platforms have a long way to go in providing the tools users need to discern legitimate sources from fake news. In the meantime, it’s on each of us. Don’t let anyone lie, cheat, or scare you away from making the most informed decision for your community at the ballot box.

International Threats to Freedom of Expression: 2023 Year in Review

2023 has been an unfortunate reminder that the right to free expression is most fragile for groups on the margins, and that it can quickly become a casualty during global conflicts. Threats to speech arose out of the ongoing war in Palestine. They surfaced in bills and laws around the world that explicitly restrict LGBTQ+ freedom of expression and privacy. And past threats—and acts—were ignored by the United Nations, as the UN’s Secretary-General announced that Saudi Arabia would be granted host status for the 2024 Internet Governance Forum (IGF).

LGBTQ+ Rights

Globally, an increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and off. The digital rights community has observed an uptick in censorship of LGBTQ+ websites as well as troubling attempts by several countries to pass explicitly anti-LGBTQ+ bills restricting freedom of expression and privacy—bills that also fuel offline intolerance against LGBTQ+ people, and force LGBTQ+ individuals to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One prominent example is Ghana’s draconian ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill, 2021.’ This year, EFF and other civil society partners continued to call on the government of Ghana to immediately reject this bill and commit instead to protecting the human rights of all people in Ghana.

To learn more about this issue, read our 2023 Year in Review post on threats to LGBTQ+ speech.

Free Expression in Times of Conflict

The war in Palestine has exacerbated the threats to free expression that Palestinians already faced, particularly those living in Gaza. Most acutely, the Israeli government began targeting telecommunications infrastructure early on in the war, inhibiting Palestinians’ ability to share information and access critical services. At the same time, platforms have failed to moderate misinformation (while overmoderating other content), which—at a time when many Palestinians can’t access the internet—has created an imbalance in information and media coverage.

EFF teamed up with a number of other digital rights organizations—including 7amleh, Access Now, Amnesty International, and Article 19—to demand that Meta take steps to ensure Palestinian content is moderated fairly. This effort follows the 2021 campaign of the same name.

The 2024 Internet Governance Forum

Digital rights organizations were shocked to learn in October that the 2024 Internet Governance Forum is slated to be held in Saudi Arabia. Following the announcement, we joined numerous digital rights organizations in calling on the United Nations to reverse its decision.

EFF has, for many years, expressed concern about the normalization of the government of Saudi Arabia by Silicon Valley companies and the global community. In recent years, the Saudi government has spied on its own citizens on social media and through the use of spyware; imprisoned Wikipedia volunteers for their contributions to access to information on the platform; sentenced a PhD student and mother of two to 34 years in prison and a subsequent travel ban of the same length; and sentenced a teacher to death for his posts on social media.

The UK Threatens Expression

We have been disheartened this year to see the push in the UK to pass its Online Safety Bill. EFF has long opposed the legislation, and throughout 2023 we stressed that mandated scanning obligations will lead to censorship of lawful and valuable expression. The Online Safety Bill also threatens another basic human right: our right to have a private conversation. From our point of view, the UK pushed the Bill through fully aware of the damage it would cause.

Despite our opposition, and that of the civil society groups in the UK with which we worked closely, the bill passed in September. But the story doesn't end here. The Online Safety Act remains vague about what exactly it requires of platforms and users alike, and Ofcom must now draft regulations to operationalize the legislation. EFF will monitor Ofcom’s drafting of the regulations, and we will continue to hold the UK government accountable to the international and European human rights protections to which it is a signatory.

New Hope for Alaa Abd El Fattah Case

While 2023 has overall been a disappointing year for free expression, there is always hope, and for us this has come in the form of renewed efforts to free our friend and EFF Award Winner, Alaa Abd El Fattah.

This year, on Alaa’s 42nd birthday (and his tenth in prison), his family filed a new petition to the UN Working Group on Arbitrary Detention in the hopes of finally securing his release. This latest appeal comes after Alaa spent more than half of 2022 on a hunger strike in protest of his treatment in prison, which he started on the first day of Ramadan. A few days after the strike began, on April 11, Alaa’s family announced that he had become a British citizen through his mother. There was hope last year, following a groundswell of protests that began in the summer and extended to the COP27 conference, that the UK foreign secretary could secure his release, but so far, this has not happened. Alaa's hunger strike did result in improved prison conditions and family visitation rights, but only after it prompted protests and fifteen Nobel Prize laureates demanded his release.

This holiday season, we are hoping that Alaa can finally be reunited with his family.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Digital Rights Groups Urge Meta to Stop Silencing Palestine

Legal intern Muhammad Essa Fasih contributed to this post.

In the wake of the October 7 attack on Israel and the ensuing backlash against Palestinians, Meta has engaged in unjustified content and account takedowns on its social media platforms. This has suppressed the voices of journalists, human rights defenders, and many others concerned about or directly affected by the war.

This is not the first instance of biased moderation of content related to Palestine and the broader MENA region. EFF has documented numerous instances over the past decade in which platforms have seemingly turned their backs on critical voices in the region. In 2021, when Israel was forcibly evicting Palestinian families from their homes in Jerusalem, international digital and human rights groups including EFF partnered in a campaign to hold Meta to account. These demands were backed by prominent signatories, and later echoed by Meta’s Oversight Board.

The campaign—along with other advocacy efforts—led to Meta agreeing to an independent review of its content moderation activities in Israel and Palestine, published in October 2022 by BSR. The BSR audit was a welcome development in response to our original demands; however, we have yet to see its recommendations fully implemented in Meta’s policies and practices.

The rest of our demands went unmet. Therefore, in the context of the current crackdown on pro-Palestinian voices, EFF and 17 other digital and human rights organizations are issuing an updated set of demands to ensure that Meta considers the impact of its policies and content moderation practices on Palestinians, and takes serious action to ensure that its content interventions are fair, balanced, and consistent with the Santa Clara Principles on Transparency and Accountability in Content Moderation.

Why it matters

The campaign is crucial for many reasons ranging from respect for free speech and equality to prevention of violence.

Free public discourse plays an important role in global conflicts because it can affect the decision-making of those in decisive positions. The dissemination of information and public opinion can reflect majority sentiment and build the pressure needed for individuals in positions of power to make democratic and humane decisions. Borderless platforms like Meta therefore have colossal power to shape narratives across the globe. To reflect a true picture of majority public opinion, it is essential that these platforms provide a level playing field for all sides of a conflict.

These leviathan platforms have the power and responsibility to refuse to succumb to unjustifiable government demands intended to skew the discourse in favor of those governments’ geopolitical and economic interests. There is already a significant imbalance between the government of Israel and the Palestinian people, particularly in their economic and geopolitical influence. Adding to that, suppression of information coming out of or about the weaker party has the potential to aid and abet further suffering.

Meta’s censorship of content showing the scale of the current devastation and suffering in Palestine, through loose application of categories like nudity, sexual activity, and graphic content, interferes with the right to information and free expression at a time when those rights are more needed than ever, with the UN urging the entire international community to work to “mitigate the risk of genocide.” According to some estimates, over 90% of pro-Palestinian content has been deleted following Israel’s requests since October 7.

As we’ve said many times before, content moderation is impossible at scale, but a clear pattern and record of discrimination against certain groups cannot be justified and needs to be addressed immediately.

In light of all this, it is imperative that interested organizations continue to play their role in holding Meta to account for such glaring discrimination. Meta must cooperate and meet these reasonable demands if it wants to present itself as a platform that respects free speech. It is about time Mark Zuckerberg started backing his professed admiration for Frederick Douglass’ quote on free speech with some material practice.

 



Platforms Must Stop Unjustified Takedowns of Posts By and About Palestinians

Legal intern Muhammad Essa Fasih contributed to this post.

Social media is a crucial means of communication in times of conflict—it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity. Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.

In the weeks since war between Hamas and Israel began, social media platforms have removed content from or suspended accounts of Palestinian news sites, activists, journalists, students, and Arab citizens in Israel, interfering with the dissemination of news about the conflict and silencing voices expressing concern for Palestinians.

The platforms say some takedowns were caused by security issues, technical glitches, mistakes that have been fixed, or stricter rules meant to reduce hate speech. But users complain of unexplained removals of posts about Palestine since the October 7 Hamas terrorist attacks.

Meta’s Facebook shut down the page of independent Palestinian website Quds News Network, a primary source of news for Palestinians with 10 million followers. The network said its Arabic and English news pages had been deleted from Facebook, though it had been fully complying with Meta’s defined media standards. Quds News Network has faced similar platform censorship before—in 2017, Facebook censored its account, as did Twitter in 2020.

Additionally, Meta’s Instagram has locked or shut down accounts with significant followings. Among these are Let’s Talk Palestine, an account with over 300,000 followers that shares pro-Palestinian informational content, and Palestinian media outlet 24M. Meta said the accounts were locked for security reasons after signs that they were compromised.

The account of the news site Mondoweiss was also banned by Instagram and taken down on TikTok, later restored on both platforms.

Meanwhile, Instagram, TikTok, and LinkedIn users sympathetic to or supportive of the plight of Palestinians have complained of “shadow banning,” a process in which a platform limits the visibility of a user’s posts without notifying them. Users say the platforms limited the visibility of posts that contained the Palestinian flag.
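
What users describe can be pictured as a hidden ranking penalty. The sketch below, in Python, is purely illustrative: no platform publishes its ranking internals, and every name and number here is invented.

```python
# Purely illustrative sketch of "shadow banning": the post stays up and
# the author is never notified, but a hidden multiplier keeps it out of
# other users' feeds. All names and numbers are invented; no platform
# publishes its actual ranking implementation.

def feed_score(base_engagement: float, shadow_banned: bool) -> float:
    """Down-rank shadow-banned posts without any notice to the author."""
    visibility_multiplier = 0.05 if shadow_banned else 1.0
    return base_engagement * visibility_multiplier

# The author's own view of the post is unchanged; only the score used
# to rank it in other users' feeds collapses.
print(feed_score(100.0, shadow_banned=False))  # 100.0: surfaces normally
print(feed_score(100.0, shadow_banned=True))   # 5.0: effectively buried
```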

Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules. Responding to a surge in hate speech after Oct. 7, the company lowered the confidence threshold at which its automated systems classify comments as harassment or incitement to violence from 80 percent to 25 percent for users in Palestinian territories. Some content creators are using code words and emojis and shifting the spelling of certain words to evade automated filtering. Meta needs to be more transparent about decisions that downgrade users’ speech that does not violate its rules.
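
To see why that threshold change matters, here is a minimal sketch assuming a classifier that returns a confidence score between 0 and 1 for each comment. The example comments and scores are invented; the only detail taken from the reporting above is the move from an 80 percent to a 25 percent action threshold.

```python
# Minimal sketch of threshold-based comment moderation. A classifier
# assigns each comment a confidence score in [0, 1] that it is
# harassment or incitement; the platform acts on any comment whose
# score meets the action threshold. Scores and examples are invented.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    score: float  # classifier confidence that the comment violates rules

def flag_for_action(comments: list[Comment], threshold: float) -> list[Comment]:
    """Return the comments whose scores meet or exceed the threshold."""
    return [c for c in comments if c.score >= threshold]

comments = [
    Comment("explicit threat of violence", 0.92),
    Comment("heated but lawful political argument", 0.55),
    Comment("news quote containing violent rhetoric", 0.40),
    Comment("expression of grief and solidarity", 0.30),
]

# At an 80 percent threshold, only the near-certain case is actioned.
print(len(flag_for_action(comments, 0.80)))  # 1
# At 25 percent, every example above is actioned, including speech a
# human reviewer would likely leave up.
print(len(flag_for_action(comments, 0.25)))  # 4
```

Lowering the threshold trades missed violations for over-removal: far more borderline speech gets actioned automatically, which is why transparency about such changes matters.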

For some users, posts have led to more serious consequences. Palestinian citizens of Israel, including well-known singer Dalal Abu Amneh from Nazareth, have been arrested for social media postings about the war in Gaza that are alleged to express support for the terrorist group Hamas.

Amneh’s case demonstrates a disturbing trend concerning social media posts supporting Palestinians. Amneh’s post of the Arabic motto “There is no victor but God” and the Palestinian flag was deemed incitement. Amneh, whose music celebrates Palestinian heritage, was expressing religious sentiment, her lawyer said, not calling for violence as the police claimed.

She received hundreds of death threats and filed a complaint with Israeli police, only to be taken into custody. Her post was removed. Israeli authorities are treating any expression of support or solidarity with Palestinians as illegal incitement, the lawyer said.

Content moderation does not work at scale even in the best of times, as we have said repeatedly. At all times, mistakes can lead to censorship; during armed conflicts they can have devastating consequences.

Whether through content moderation or technical glitches, platforms may also unfairly label people and communities. Instagram, for example, inserted the word “terrorist” into the profiles of some Palestinian users when its auto-translation converted the Palestinian flag emoji followed by the Arabic word for “Thank God” into “Palestinian terrorists are fighting for their freedom.” Meta apologized for the mistake, blaming it on a bug in auto-translation. The translation is now “Thank God.”

Palestinians have long fought private censorship, so what we are seeing now is not particularly new. But it is growing at a time when online speech protections are sorely needed. We call on companies to clarify their rules, including any specific changes that have been made in relation to the ongoing war; to stop the knee-jerk reaction of treating posts that express support for Palestinians—or that notify users of peaceful demonstrations, or document violence and the loss of loved ones—as incitement; and to follow their own existing standards to ensure that moderation remains fair and unbiased.

Platforms should also follow the Santa Clara Principles on Transparency and Accountability in Content Moderation: notify users when, how, and why their content has been actioned, and give them the opportunity to appeal. We know Israel has worked directly with Facebook, requesting and garnering removal of content it deemed incitement to violence, and suppressing posts by Palestinians about human rights abuses during May 2021 demonstrations that turned violent.

The horrific violence and death in Gaza is heartbreaking. People are crying out their grief and outrage to the world, to family and friends, to co-workers, religious leaders, and politicians. Labeling large swaths of this outpouring of emotion by Palestinians as incitement is unjust and wrongly denies people an important outlet for expression and solace.
