
On Alaa Abd El Fattah’s 43rd Birthday, the Fight For His Release Continues

November 18, 2024 at 12:13

Today marks prominent British-Egyptian coder, blogger, activist, and political prisoner Alaa Abd El Fattah’s 43rd birthday—his eleventh behind bars. Alaa should have been released on September 29, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. Since September 29, Alaa’s mother, mathematician Laila Soueif, has been on hunger strike, while she and the rest of his family have worked to engage the British government in securing Alaa’s release.

Last November, an international counsel team acting on behalf of Alaa’s family filed an urgent appeal to the UN Working Group on Arbitrary Detention. EFF joined 33 other organizations in supporting the submission and urging the UNWGAD promptly to issue its opinion on the matter. Last week, we signed another letter urging the UNWGAD once again to issue an opinion.

Despite his ongoing incarceration, Alaa’s writing and his activism have continued to be honored worldwide. In October, he was announced as the joint winner of the PEN Pinter Prize alongside celebrated writer Arundhati Roy. His 2021 collection of essays, You Have Not Yet Been Defeated, has been re-released as part of Fitzcarraldo Editions’ First Decade Collection. Alaa is also the 2023 winner of PEN Canada’s One Humanity Award and the 2022 winner of EFF’s own EFF Award for Democratic Reform Advocacy.

EFF once again calls for Alaa Abd El Fattah’s immediate and unconditional release and urges the UN Working Group on Arbitrary Detention to promptly issue its opinion on his incarceration. We further urge the British government to take action to secure his release.

The UK Must Act: Alaa Abd El-Fattah Still Imprisoned 25 Days After Release Date

October 23, 2024 at 13:30

It’s been 25 days since September 29, the day that should have seen British-Egyptian blogger, coder, and activist Alaa Abd El Fattah walk free. Egyptian authorities refused to release him at the end of his sentence, in contradiction of the country's own Criminal Procedure Code, which requires that time served in pretrial detention count toward a prison sentence. In the days since, Alaa’s family has been able to secure meetings with high-level British officials, including Foreign Secretary David Lammy, but as of yet, the Egyptian government still has not released Alaa.

In early October, Alaa was named the 2024 PEN Writer of Courage by PEN Pinter Prize winner Arundhati Roy, who presented the award in a ceremony where it was received by Egyptian publication Mada Masr editor Lina Attalah on Alaa’s behalf.

Alaa’s mother, Laila Soueif, is now on her third week of hunger strike and says that she won’t stop until Alaa is free or she’s taken to the hospital. In recent weeks, Alaa’s mother and sisters have met with several members of Parliament in the hopes of placing more pressure on officials. As the BBC reports, his family are “deeply disappointed with how the current government, and the previous one, have handled his case” and believe that the UK has more leverage with Egypt that it is not using.

Alaa deserves to finally return to his family, now in the UK, and to be reunited with his son, Khaled, who is now a teenager. We urge EFF supporters in the UK to write to their MP to press the UK’s Labour government to use its power to push for Alaa’s release.

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

October 15, 2024 at 15:48

Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF today released the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt artificial intelligence (AI) and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We organized the report’s content, along with testimonies on current challenges from civil society experts on the ground, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affect human rights. Colombians must undergo classification by Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems seeking to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like Mexico, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. States’ responsibilities under international human rights law as guarantors of rights, and people and social groups as rights holders—entitled to claim those rights and participate—are two basic tenets that must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making, as we underscore in the report.

Inter-American Human Rights Framework

Building off extensive research of Inter-American Commission on Human Rights’ reports and Inter-American Court of Human Rights’ decisions and advisory opinions, we identify human rights implications and devise an operational framework for their due consideration in government use of algorithmic systems.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill necessary and proportionate principles, and what this entails. We underscore what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of them embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving those goals, to continuously monitoring and evaluating their implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that unfold from informational self-determination, including the right to know what data about them is contained in state records, where it came from, and how it is being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises and are vital for ensuring that algorithmic-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment. 

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

By Karen Gullo
October 10, 2024 at 14:20

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility not only to protect users’ privacy but also to be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to their responsibilities in ¿Quién Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and was therefore dropped from the assessment. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and a half. Celero scored highest among fixed internet providers with one and three-quarters stars. Interfast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel publish privacy policies for their services, while Más Móvil has committed to follow relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

Neither mobile nor fixed internet telecommunications companies commit to informing users about requests or orders from authorities to access their personal data, according to the report. As for digital security, companies have chosen to maintain a passive position regarding its promotion.

None of the mobile providers have opposed requiring users to undergo facial recognition to register or access their mobile phone services. As the report underlines, companies' resignation "marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data." Mandating face recognition as a condition to use mobile services is "an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime," the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote control with built-in voice commands to improve accessibility. Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s Quien Defiende Tus Datos series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain. 

The X Corp. Shutdown in Brazil: What We Can Learn

October 8, 2024 at 12:39

Update (10/8/2024): Brazil lifted a ban on the X Corp. social media platform today after the country's Supreme Court said the company had complied with all of its orders. Regulators have 24 hours to reinstate the platform, though it could take longer for it to come back online.

The feud between X Corp. and Brazil’s Supreme Court continues to drag on: After a month-long standoff, X Corp. folded and complied with court orders to suspend several accounts, name a legal representative in Brazil, and pay 28.6 million reais ($5.24 million) in fines. That hasn’t cleared the matter up, though.

The Court says X paid the wrong bank, which X denies. Justice Alexandre de Moraes has asked that the funds be redirected to the correct bank and for Brazil’s prosecutor general to weigh in on X’s requests to be reinstated in Brazil.

So the drama continues, as does the collateral damage to millions of Brazilian users who rely on X Corp. to share information and expression. While we watch it unfold, it’s not too early to draw some important lessons for the future.

Let’s break it down.

How We Got Here

The Players

Unlike courts in many countries, the Brazilian Supreme Court has the power to conduct its own investigations in limited circumstances, and issue orders based on its findings. Justice Moraes has drawn on this power frequently in the past few years to target what he called “digital militias,” anti-democratic acts, and fake news. Many in Brazil believe that these investigations, combined with other police work, have helped rein in genuinely dangerous online activities and protect the survival of Brazil’s democratic processes, particularly in the aftermath of the January 2023 riots.

At the same time, Moraes’ actions have raised concerns about judicial overreach. For instance, his work is less than transparent. And the resulting content blocking orders more often than not demand suspension of entire accounts, rather than specific posts. Other leaked orders include broad requests for subscriber information of people who used a specific hashtag.

X Corp.’s controversial CEO, Elon Musk, has publicly criticized the blocking orders. And while he may be motivated by concern for online expression, it is difficult to untangle that motivation from his personal support for the far-right causes Moraes and others believe threaten democracy in Brazil.

The Standoff

In August, as part of an investigation into coordinated actions to spread disinformation and destabilize Brazilian democracy, Moraes ordered X Corp. to suspend accounts that were allegedly used to intimidate and expose law enforcement officers. Musk refused, directly contradicting his past statements that X Corp. “can’t go beyond the laws of a country”—a stance that supposedly justified complying with controversial orders to block accounts and posts in Turkey and India.

After Moraes gave X Corp. 24 hours to fulfill the order or face fines and the arrest of one of its lawyers, Musk closed down the company’s operations in Brazil altogether. Moraes then ordered Brazilian ISPs to block the platform until Musk designated a legal representative. And people who use tools such as VPNs to circumvent the block can be fined 50,000 reais (approximately $9,000 USD) per day.

These orders remain in place unless or until pending legal challenges succeed. Justice Moraes has also authorized Brazil’s Federal Police to monitor “extreme cases” of X Corp. use. It’s unclear what qualifies as an “extreme case,” or how far the police may take that monitoring authority. Flagged users must be notified that X Corp. has been blocked in Brazil; if they continue to use it via VPNs or other means, they are on the hook for substantial daily fines.

A Bridge Too Far

Moraes’ ISP blocking order, combined with the user fines, has been understandably controversial. International freedom of expression standards treat these kinds of orders as extreme measures, permissible only in exceptional circumstances where provided by law and in accordance with necessary and proportionate principles. Justice Moraes said the blocking was necessary given upcoming elections and the risk that X Corp. would ignore future orders and allow the spread of disinformation.

But it has also meant that millions of Brazilians cannot access a platform that, for them, is a valuable source of information. Indeed, restrictions on accessing X Corp. ended up creating hurdles to understanding and countering electoral disinformation. The Brazilian Association of Newspapers has argued the restrictions adversely impact journalism. At the same time, online electoral disinformation holds steady on other platforms (while possibly at a slower pace).

Moreover, now that X Corp. has bowed to his demands, Moraes’ concerns that the company cannot be trusted to comply with Brazilian law are harder to justify. In any event, there are far more balanced options now to deal with the remaining fines that don’t create collateral damage to millions of users.

What Comes Next: Concerns and Open Questions

There are several structural issues that have helped fuel the conflict and exacerbated its negative effects. First, the mechanisms for legal review of Moraes’ orders are unclear and/or ineffective. The Supreme Court has previously held that X Corp. itself cannot challenge suspension of user accounts, thwarting a legal avenue for platforms to defend their users’ speech—even where they may be the only entities that even know about the order before accounts are shut down.

A Brazilian political party and the Federal Council of the Brazilian Bar Association filed legal challenges to the blocking order and user fines, respectively, but it is likely that courts will find these challenges procedurally improper as well.

Back in 2016, a single Supreme Court Justice held back a wave of blocking orders targeting WhatsApp. Eight years later, a single Justice may have created a new precedent in the opposite direction—with little or no means to appeal it.

Second, this case highlights what can happen when too much power is held by just a few people or institutions. On the one hand, in Brazil as elsewhere, a handful of wealthy corporations wield enormous power over online expression. Here, that problem is exacerbated by Elon Musk’s control of Starlink, an important satellite internet provider in Brazil.

On the other hand, the Supreme Court also has tremendous power. Although the court’s actions may have played an important role in preserving Brazilian democracy in recent years, powers that are not properly subject to public oversight or meaningful challenge invite overreach.

All of which speaks to a need for better transparency (in both the public and private sectors) and real checks and balances. Independent observers note that, despite challenges, Brazil has already improved its democratic processes. Strengthening this path includes preventing judicial overreach.

As for social media platforms, the best way to stave off future threats to online expression may be to promote more alternatives, so no single powerful person, whether a judge, a billionaire, or even a president, can dramatically restrict online expression with the stroke of a pen.

Calls to Scrap Jordan’s Cybercrime Law Echo Calls to Reject the Cybercrime Treaty

In a number of countries around the world, communities—and particularly those that are already vulnerable—are threatened by expansive cybercrime and surveillance legislation. One of those countries is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

We’ve criticized this law before, noting how it was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. It broadly criminalizes online content labeled as “pornographic” or deemed to “expose public morals,” and prohibits the use of Virtual Private Networks (VPNs) and other proxies. Now, EFF has joined thirteen digital rights and free expression organizations in calling once again for Jordan to scrap the controversial cybercrime law.

The open letter, organized by Article 19, calls upon Jordanian authorities to cease use of the cybercrime law to target and punish dissenting voices and stop the crackdown on freedom of expression. The letter also reads: “We also urge the new Parliament to repeal or substantially amend the Cybercrime Law and any other laws that violate the right to freedom of expression and bring them in line with international human rights law.”

Jordan’s law is a troubling example of how overbroad cybercrime legislation can be misused to target marginalized communities and suppress dissent. This is the type of legislation that the U.N. General Assembly has expressed concern about, including in 2019 and 2021, when it warned against cybercrime laws being used to target human rights defenders. These concerns are echoed by years of reports from U.N. human rights experts on how abusive cybercrime laws facilitate human rights abuses.

The U.N. Cybercrime Treaty also poses serious threats to free expression. Far from protecting against cybercrime, this treaty risks becoming a vehicle for repressive cross-border surveillance practices. By allowing broad international cooperation in surveillance for any crime deemed “serious” under national laws—defined as offenses punishable by at least four years of imprisonment—and without robust mandatory safeguards or detailed operational requirements to ensure “no suppression” of expression, the treaty risks being exploited by governments to suppress dissent and target marginalized communities, as seen with Jordan’s overbroad 2023 cybercrime law. The fate of the U.N. Cybercrime Treaty now lies in the hands of member states, who will decide on its adoption later this year.

Human Rights Claims Against Cisco Can Move Forward (Again)

By Cindy Cohn
September 18, 2024 at 18:04

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems that are used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   

Unveiling Venezuela’s Repression: A Legacy of State Surveillance and Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines infrastructural control (keeping essential services barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, the civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have carried out a severe crackdown on demonstrations, leaving 20 people dead. The results announced by the government, in which it claimed the re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with numerous instances of technology-supported repression and violence. The surveillance and control apparatus saw intensified use, such as increased deployment of VenApp, a surveillance application originally launched in December 2022 to report failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is the pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media; some individuals have refused interviews or published documented human rights violations under generic names; and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Additionally, many are now sending information about what is happening in the country to their networks abroad for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist. In August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.

 

Britain Must Call for Release of British-Egyptian Activist and Coder Alaa Abd El Fattah

As British-Egyptian coder, blogger, and activist Alaa Abd El Fattah enters his fifth year in a maximum security prison outside Cairo, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa, we stand with his family and an ever-growing international coalition of supporters in calling for his release.

Over these five years, Alaa has endured beatings and solitary confinement. At times his family was denied visits or any contact with him. He went on a seven-month hunger strike to protest his incarceration, and his family feared he might not survive.

But global attention on his plight, bolstered by support from British officials in recent years, ultimately led to improved prison conditions and family visitation rights.

But let’s be clear: Egypt’s long-running retaliation against Alaa for his activism is a travesty and an arbitrary use of its draconian, anti-speech laws. He has spent the better part of the last 10 years in prison. He has been investigated and imprisoned under every Egyptian regime that has held power in his lifetime. The time is long overdue for him to be freed.

Over 20 years ago Alaa began using his technical skills to connect coders and technologists in the Middle East to build online communities where people could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s successive repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Alaa is a symbol for the principle of free speech in a region of the world where speaking out for justice and human rights is dangerous and using the power of technology to build community is criminalized. But he has also come to symbolize the oppression and cruelty with which the Egyptian government treats those who dare to speak out against authoritarianism and surveillance.

Egyptian authorities’ relentless, politically motivated pursuit of Alaa is an egregious display of abusive police power and lack of due process. He was first arrested and detained in 2006 for participating in a demonstration. He was arrested again in 2011 on charges related to another protest. In 2013 he was arrested and detained on charges of organizing a protest. He was eventually released in 2014, but imprisoned again after a judge found him guilty in absentia.

What diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family? - David Lammy

That same year he was released on bail, only to be re-arrested when he went to court to appeal his case. In 2015 he was sentenced to five years in prison and released in 2019. But he was re-arrested in a massive sweep of activists in Egypt while on probation and charged with spreading false news and belonging to a terrorist organization for sharing a Facebook post about human rights violations in prison. He was sentenced in 2021, after being held in pre-trial detention for more than two years, to five years in prison. September 29 will mark five years that he has spent behind bars.

While he has been in prison, an anthology of his writing, translated into English by anonymous supporters, was published in 2021 as You Have Not Yet Been Defeated. That December, he became a British citizen through his mother, the rights activist and mathematician Laila Soueif.

Protesting his conditions, Alaa shaved his head and went on hunger strike beginning in April 2022. As he neared the third month of his hunger strike, then-UK Foreign Secretary Liz Truss said she was working hard to secure his release. Similarly, then-PM Rishi Sunak wrote in a letter to Alaa’s sister, Sanaa Seif, that “the government is deeply committed to doing everything we can to resolve Alaa's case as soon as possible."

David Lammy, then a Member of Parliament and now Britain’s foreign secretary, asked Parliament in November 2022, “what diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family?” Lammy joined Alaa’s family during a sit-in outside of the Foreign Office.

When the UK government’s promises failed to come to fruition, Alaa escalated his hunger strike in the runup to the COP27 gathering. At the same time, a coordinated campaign led by his family and supported by a number of international organizations helped draw global attention to his plight, and ultimately led to improved prison conditions and family visitation rights.

But although Alaa’s conditions have improved and his family visitation rights have been secured, he remains wrongfully imprisoned, and his family fears that the Egyptian government has no intention of releasing him.

With Lammy now Foreign Secretary and a new Labour government in place in the UK, there is renewed hope for Alaa’s release. Keir Starmer, Labour leader and the new prime minister, has voiced his support for Alaa’s release.

The new government must make good on its pledge to defend British values and interests, and advocate for the release of British citizen Alaa Abd El Fattah. We encourage British citizens to write to their MP and advocate for his release. His continued detention is unjust. Egypt should face the sole of shoes around the world until Alaa is freed.

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

By Karen Gullo
1 August 2024 at 10:08

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes—defined as offenses punishable by four years or more of imprisonment. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise it risks authorizing cross-border spying and extensive surveillance, and enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty to genuinely protect human rights and ensure comprehensive mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be revised to include those involving death, injury, or other grave harms to further limit the scope of the treaty.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

By Karen Gullo
31 July 2024 at 10:53

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and does not provide mandatory safeguards to protect their rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and gain access to the sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and bypassing effective security measures, which would protect researchers and journalists.

By not requiring malicious or dishonest intent for accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

By Karen Gullo
30 July 2024 at 18:44

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends including in the draft an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights; narrowing the treaty’s scope; explicitly requiring that crimes covered by the treaty be committed with “criminal intent”; and several other changes.

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, raised concerns about the draft’s treatment of the liability of online platforms for offenses committed by their users, noting the risk that online intermediaries could be held liable even when they are unaware of such user-generated content.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it could criminalize, or risk criminalizing, content and conduct that has evidentiary, scientific, or artistic value, and it doesn’t sufficiently decriminalize the consensual conduct of older children in relationships.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, which jeopardizes survivors’ ability to safely report CSAM to law enforcement or platforms.

HRW recommends adding language that excludes material manifestly artistic, among other uses, and conduct that is carried out for legitimate purposes related to documentation of human rights abuses or the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, which promotes international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial data requests from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them)­ as ICT—a broader term with a wider remit.”

 

Weak Human Rights Protections: Why You Should Hate the Proposed UN Cybercrime Treaty

By Karen Gullo
30 July 2024 at 08:58

The proposed UN Cybercrime Convention dangerously undermines human rights, opening the door to unchecked cross-border surveillance and government overreach. Despite two and a half years of negotiations, the draft treaty authorizes extensive surveillance powers without robust safeguards, omitting essential data protection principles.

This risks turning international efforts to fight cybercrime into tools for human rights abuses and transnational repression.

Safeguards like prior judicial authorization call for a judge's approval of surveillance before it happens, ensuring the measure is legitimate, necessary and proportionate. Notifying individuals when their data is being accessed gives them an opportunity to challenge requests that they believe are disproportionate or unjustified.

Additionally, requiring states to publish statistical transparency reports can provide a clear overview of surveillance activities. These safeguards are not just legal formalities; they are vital for upholding the integrity and legitimacy of law enforcement activities in a democratic society.

Unfortunately, the draft treaty is severely lacking in these protections. An article in the current draft about conditions and safeguards is vaguely written, permitting countries to apply safeguards only "where appropriate" and making them dependent on states' domestic laws, some of which have weak human rights protections. This means that the level of protection against abusive surveillance and data collection can vary widely based on each country's discretion.

Extensive surveillance powers must be reined in and strong human rights protections added. Without those changes, the proposed treaty unacceptably endangers human rights around the world and should not be approved.

Check out our two detailed analyses about the lack of human rights safeguards in the draft treaty.

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

By Karen Gullo
24 July 2024 at 22:06

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool that countries with poor human rights records can use against journalists, dissenters, and everyday people. Watch the briefing here:

 


EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:

  1. Overly Broad Scope and Legal Uncertainty:

  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – creates significant legal uncertainty and expands the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression, having a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19, which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (a standard set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the cooperation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:

  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:

  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should be: 
    • applicable to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • in accordance with the principles of legality, necessity, proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:

  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28/4, 29, and 30 should be deleted, as they include excessive surveillance measures that open the door for interference with privacy without sufficient safeguards as well as potentially undermining cybersecurity and encryption.
  5. International Cooperation:

  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above.) Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Specifically, where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive rules on obligations for mutual legal assistance with any State Party with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of ”substantial grounds for believing” for the requested State to refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.

Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights nor cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)





UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad-Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the currently proposed UN Cybercrime Convention's broad definition of "electronic data" and inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistencies across countries in human rights protections.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants extensive government authority to monitor and access digital systems and data, categorizing communications data into subscriber data, traffic data, and content data. But it also makes use of a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data" is eligible for three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the traditional categories of traffic data, subscriber data, and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, this means that private, unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us also use health and fitness apps in ways that we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if they remain within the device and are not transmitted. Another example could be photos that someone takes on a device but doesn't share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others.

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI systems, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain because it may not have been communicated to or possessed by any communications intermediary or system. It is thus an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that can be targeted by law enforcement through production orders or search-and-seizure powers. These emerging technology uses are their own category, but they are most like "content" in communications surveillance, which usually enjoys high protection. "Electronic data" must receive protection equal to the "content" of communications and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries have various kinds of information that are protected by a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, carry varying degrees of extra requirements before law enforcement may access them using production orders or search-and-seizure powers, as well as various after-the-fact protections, such as preventing their use in prosecutions or civil actions.

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights, which both misses the chance to require all countries to provide these minimal protections and potentially further undermines the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries that either don’t provide those protections or where they are not solid and clear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principle of legality, necessity and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41) do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide necessary information to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to help law enforcement in ways that could compromise security or reveal confidential information. For example, an engineer could be required to disclose an unfixed security flaw, or to provide encryption keys that protect data, which could then be misused. As written, the provision could be interpreted to permit disproportionate orders, including orders forcing people to disclose encryption keys, such as signing keys, on the basis that these are “the necessary information to enable” some form of surveillance.

Privacy International and EFF strongly recommend that Article 28(4) be removed in its entirety. Instead, it has been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft Convention to clarify limits, so that technologists are not forced to reveal confidential information or do work on behalf of law enforcement against their will. Once again, clear legal standards are also needed for how law enforcement can be authorized to seize and search people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when absolutely necessary, but Article 24 does not mention the principle of necessity, and it lacks individual notice requirements, explicit grounds requirements, and statistical transparency obligations.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent in the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards— including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight or prior judicial authorization. The convention needs instead to require prior judicial authorization before such information can be demanded from the companies that hold it.

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.

Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning as they underpin international surveillance cooperation. This means that one country can easily comply with the requests of another, which if not adequately safeguarded, can lead to widespread government overreach and human rights abuses. 

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

The Global Suppression of Online LGBTQ+ Speech Continues

A global increase in anti-LGBTQ+ intolerance is having a significant impact on digital rights. As we wrote last year, censorship of LGBTQ+ websites and online content is on the rise. For many LGBTQ+ individuals the world over, the internet can be a safer space for exploring identity, finding community, and seeking support. But from anti-LGBTQ+ bills restricting free expression and privacy to content moderation decisions that disproportionately impact LGBTQ+ users, digital spaces that used to seem like safe havens are, for many, no longer so.

EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world, and that includes LGBTQ+ communities, which all too often face threats, censorship, and other risks when they go online. This Pride month—and the rest of the year—we’re highlighting some of those risks, and what we’re doing to help change online spaces for the better.

Worsening threats in the Americas

In the United States, where EFF is headquartered, recent gains in rights have been followed by an uptick in intolerance that has led to legislative efforts, mostly at the state level. In 2024 alone, 523 anti-LGBTQ+ bills have been proposed by state legislatures, many of which restrict freedom of expression. In addition to these bills, a drive in mostly conservative areas to ban books in school libraries—many of which contain LGBTQ+ themes—is creating an environment in which queer youth feel even more marginalized.

At the national level, an effort to protect children from online harms—the Kids Online Safety Act (KOSA)—risks alienating young people, particularly those from marginalized communities, by restricting their access to certain content on social media. EFF spoke with young people about KOSA, and found that many are concerned that they will lose access to help, education, friendship, and a sense of belonging that they have found online. At a time when many young people have just come out of several years of isolation during the pandemic and reliance on online communities for support, restricting their access could have devastating consequences.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Similarly, age-verification bills being put forth by state legislatures often seek to prevent access to material deemed harmful to minors. If passed, these measures would restrict access to vital content, including education and resources that LGBTQ+ youth without local support often rely upon. These bills often contain vague and subjective definitions of “harm” and are all too often another strategy in the broader attack on free expression that includes book bans, censorship of reproductive health information, and attacks on LGBTQ+ youth.

Moving south of the border, in much of South and Central America, legal progress has been made with respect to rights, but violence against LGBTQ+ people is particularly high, and that violence often has online elements to it. In the Caribbean, where a number of countries have strict anti-LGBTQ+ laws on the books often stemming from the colonial era, online spaces can be risky, and those who express their identities in them often face bullying and doxxing, which can lead to physical harm.

In many other places throughout the world, the situation is even worse. While LGBTQ+ rights have progressed considerably over the past decade in a number of democracies, the sense of freedom and ease that these hard-won gains created for many is suffering serious setbacks. And in more authoritarian countries where the internet may have once been a lifeline, crackdowns on expression have coincided with growth in internet use and often explicitly target LGBTQ+ speech.

In Europe, anti-LGBTQ+ violence at a record high

In recent years, legislative efforts aimed at curtailing LGBTQ+ rights have gained momentum in several European countries, largely the result of a rise in right-wing populism and conservatism. In Hungary, for instance, the Orban government has enacted laws that restrict LGBTQ+ rights under the guise of protecting children. In 2021, the country passed a law banning the portrayal or promotion of LGBTQ+ content to minors. In response, the European Commission launched legal cases against Hungary—as well as some regions in Poland—over LGBTQ+ discrimination, with Commission President Ursula von der Leyen labeling the law as "a shame" and asserting that it clearly discriminates against people based on their sexual orientation, contravening the EU's core values of equality and human dignity​.

In Russia, the government has implemented severe restrictions on LGBTQ+ content online. A law initially passed in 2013 banning the promotion of “non-traditional sexual relations” among minors was expanded in 2022 to apply to individuals of all ages, further criminalizing LGBTQ+ content. The law prohibits the mention or display of LGBTQ+ relationships in advertising, books, media, films, and on online platforms, and has created a hostile online environment. Media outlets that break the law can be fined or shut down by the government, while foreigners who break the law can be expelled from the country. 

Among the first victims of the amended law were seven migrant sex workers—all trans women—from Central Asia who were fined and deported in 2023 after they published their profiles on a dating website. Also in 2023, six online streaming platforms were penalised for airing movies with LGBTQ-related scenes. The films included “Bridget Jones: The Edge of Reason”, “Green Book”, and the Italian film “Perfect Strangers.”

Across the continent, as anti-LGBTQ+ violence is at a record high, queer communities are often the target of online threats. A 2022 report by the European Digital Media Observatory reported a significant increase in online disinformation campaigns targeting LGBTQ+ communities, which often frame them as threats to traditional family values. 

Across Africa, LGBTQ+ rights under threat

In 30 of the 54 countries on the African continent, homosexuality is prohibited. Nevertheless, there is a growing movement to decriminalize LGBTQ+ identities and push toward achieving greater rights and equality. As in many places, the internet often serves as a safer space for community and organizing, and has therefore become a target for governments seeking to crack down on LGBTQ+ people.

In Tanzania, for instance, where consensual same-sex acts are prohibited under the country’s colonial-era Penal Code, authorities have increased digital censorship against LGBTQ+ content, blocking websites and social media platforms that provide support and information to the LGBTQ+ community. This crackdown is making it increasingly difficult for people to find safe spaces online. As a result of these restrictions, many online groups used by the LGBTQ+ community for networking and support have been forced to disband, driving individuals to riskier public spaces to meet and socialize​.

In other countries across the continent, officials are weaponizing legal systems to crack down on LGBTQ+ people and their expression. According to Access Now, a proposed law in Kenya, the Family Protection Bill, seeks to ban a variety of actions, including public displays of affection, engagement in activities that seek to change public opinion on LGBTQ+ issues, and the use of the internet, media, social media platforms, and electronic devices to “promote homosexuality.” Furthermore, the prohibited acts would fall under the country’s Computer Misuse and Cybercrimes Act of 2018, giving law enforcement the power to monitor and intercept private communications during investigations, as provided by Section 36 of the National Intelligence Service Act, 2012. 

A draconian law passed in Uganda in 2023, the Anti-Homosexuality Act, introduced capital punishment for certain acts, while allowing for life imprisonment for others. The law further imposes a 20-year prison sentence for people convicted of “promoting homosexuality,” which includes the publication of LGBTQ+ content, as well as “the use of electronic devices such as the internet, mobile phones or films for the purpose of homosexuality or promoting homosexuality.”

In Ghana, if passed, the anti-LGBTQ+ Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill would introduce prison sentences for those who engage in LGBTQ+ sexual acts as well as those who promote LGBTQ+ rights. As we’ve previously written, the bill would ban all speech and activity, on and offline, that even remotely supports LGBTQ+ rights. Though the bill passed through parliament in March, the president won’t sign it until the country’s Supreme Court rules on its constitutionality.

And in Egypt and Tunisia, authorities have integrated technology into their policing of LGBTQ+ people, according to a 2023 Human Rights Watch report. In Tunisia, where homosexuality is punishable by up to three years in prison, online harassment and doxxing are common, threatening the safety of LGBTQ+ individuals. Human Rights Watch has documented cases in which social media users, including alleged police officers, have publicly harassed activists, resulting in offline harm.

Egyptian security forces often monitor online LGBTQ+ activity and have used social media platforms as well as Grindr to target and arrest individuals. Although same-sex relations are not explicitly banned by law in the country, authorities use various morality provisions to effectively criminalize homosexual relations. More recently, prosecutors have utilized cybercrime and online morality laws to pursue harsher sentences.

In Asia, Cybercrime laws threaten expression

LGBTQ+ rights in Asia vary widely. While homosexual relations are legal in a majority of countries, they are strictly banned in twenty, and same-sex marriage is only legal in three—Taiwan, Nepal, and Thailand. Online threats are also varied, ranging from harassment and self-censorship to the censoring of LGBTQ+ content—such as in Indonesia, Iran, China, Saudi Arabia, the UAE, and Malaysia, among other nations—as well as legal restrictions with often harsh penalties.

The use of cybercrime provisions to target LGBTQ+ expression is on the rise in a number of countries, particularly in the MENA region. In Jordan, the Cybercrime Law of 2023, passed last August, imposes restrictions on freedom of expression, particularly for LGBTQ+ individuals. Articles 13 and 14 of the law impose penalties for producing, distributing, or consuming “pornographic activities or works” and for using information networks to “facilitate, promote, incite, assist, or exhort prostitution and debauchery, or seduce another person, or expose public morals.” Jordan follows in the footsteps of neighboring Egypt, which instituted a similar law in 2018.

The LGBTQ+ movement in Bangladesh is impacted by the Cyber Security Act, quietly passed in 2023. Several provisions of the Act can be used to target LGBTQ+ sites: Section 8 enables the government to shut down websites, Section 42 grants law enforcement agencies the power to search and seize a person’s hardware, social media accounts, and documents, both online and offline, without a warrant, and Section 25 criminalizes published content that tarnishes the image or reputation of the country.

The online struggle is global

In addition to national-level restrictions, LGBTQ+ individuals often face content suppression on social media platforms. While some of this occurs as the result of government requests, much of it is actually due to platforms’ own policies and practices. A recent GLAAD case study points to specific instances where content promoting or discussing LGBTQ+ issues is disproportionately flagged and removed, compared to non-LGBTQ+ content. The GLAAD Social Media Safety Index also provides numerous examples where platforms inconsistently enforce their policies. For instance, posts that feature LGBTQ+ couples or transgender individuals are sometimes taken down for alleged policy violations, while similar content featuring heterosexual or cisgender individuals remains untouched. This inconsistency suggests a bias in content moderation that EFF has previously documented and leads to the erasure of LGBTQ+ voices in online spaces.

Likewise, the community now faces threats at the global level, in the form of the impending UN Cybercrime Convention, currently in negotiations. As we’ve written, the Convention would expand cross-border surveillance powers, enabling nations to potentially exploit these powers to probe acts they controversially label as crimes based on subjective moral judgements rather than universal standards. This could jeopardize vulnerable groups, including the LGBTQ+ community.

EFF is pushing back to ensure that the Cybercrime Treaty's scope is narrow and that human rights safeguards are a priority. You can read our written and oral interventions and follow our Deeplinks Blog for updates. Earlier this year, along with Access Now, we also submitted comments to the U.N. Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to inform the Independent Expert’s thematic report presented to the U.N. Human Rights Council at its fifty-sixth session.

But just as the struggle for LGBTQ+ rights and recognition is global, so too is the struggle for a safer and freer internet. EFF works year round to highlight that struggle and to ensure LGBTQ+ rights are protected online. We collaborate with allies around the world, and work to ensure that both states and companies protect and respect the rights of LGBTQ+ communities worldwide.

We also want to help LGBTQ+ communities stay safer online. As part of our Surveillance Self-Defense project, we offer a number of guides for safer online communications, including a guide specifically for LGBTQ+ youth.

EFF believes in preserving an internet that is free for everyone. While there are numerous harms online as in the offline world, digital spaces are often a lifeline for queer youth, particularly those living in repressive environments. The freedom of discovery, the sense of community, and the access to information that the internet has provided for so many over the years must be preserved. 



If Not Amended, States Must Reject the Flawed Draft UN Cybercrime Convention Criminalizing Security Research and Certain Journalism Activities

This is the first post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out “The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses.”

The latest and nearly final version of the proposed UN Cybercrime Convention—dated May 23, 2024, but released today, June 14—leaves security researchers’ and investigative journalists’ rights perilously unprotected, despite EFF’s repeated warnings.

The world benefits from people who help us understand how technology works and how it can go wrong. Security researchers, whether independently or within academia or the private sector, perform this important role of safeguarding information technology systems. Relying on the freedom to analyze, test, and discuss IT systems, researchers identify vulnerabilities that can cause major harms if left unchecked. Similarly, investigative journalists and whistleblowers play a crucial role in uncovering and reporting on matters of significant public interest including corruption, misconduct, and systemic vulnerabilities, often at great personal risk.

For decades, EFF has fought for security researchers and journalists, provided legal advice to help them navigate murky criminal laws, and advocated for their right to conduct security research without fear of legal repercussions. We’ve helped researchers when they’ve faced threats for performing or publishing their research, including identifying and disclosing critical vulnerabilities in systems. We’ve seen how vague and overbroad laws on unauthorized access have chilled good-faith security research, threatening those who are trying to keep us safe or report on public interest topics. 

Now, just as some governments have individually finally recognized the importance of protecting security researchers’ work, many of the UN convention’s criminalization provisions threaten to spread antiquated and ambiguous language around the world with no meaningful protections for researchers or journalists. If these and other issues are not addressed, the convention poses a global threat to cybersecurity and press freedom, and UN Member States must reject it.

This post will focus on one critical aspect of coders’ rights under the newest released text: the provisions that jeopardize the work of security researchers and investigative journalists. We will delve into other aspects of the convention in subsequent posts.

How the Convention Fails to Protect Security Research and Reporting on Public Interest Matters

What Provisions Are We Discussing?

Articles 7 to 11 of the Criminalization Chapter—covering illegal access, illegal interception, interference with electronic data, interference with ICT systems, and misuse of devices—are core cybercrime offenses of which security researchers have often been accused as a result of their work. (In previous drafts of the convention, these were Articles 6-10.)

  • Illegal Access (Article 7): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities.
  • Illegal Interception (Article 8): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require malicious criminal intent (mens rea).
  • Interference with Data (Article 9) and Interference with Computer Systems (Article 10): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks that could be described as “interference” even though they don’t cause harm and are performed without criminal malicious intent.

All of these articles fail to include a mandatory element of criminal intent to cause harm, steal, or defraud. A requirement that the activity cause serious harm is also absent from Article 10 and optional in Article 9. These safeguards must be mandatory.

What We Told the UN Drafters of the Convention in Our Letter?

Earlier this year, EFF submitted a detailed letter to the drafters of the UN Cybercrime Convention on behalf of 124 signatories, outlining essential protections for coders. 

Our recommendations included defining unauthorized access to include only those accesses that bypass security measures, and only where such security measures count as effective. The convention’s existing language harks back to cases where people were criminally prosecuted just for editing part of a URL.

We also recommended ensuring that criminalization of actions requires clear malicious or dishonest intent to harm, steal, or infect with malware. And we recommended explicitly exempting good-faith security research and investigative journalism on issues of public interest from criminal liability.

What Has Already Been Approved?

Several provisions of the UN Cybercrime Convention have been approved ad referendum. These include both complete articles and specific paragraphs, indicating varying levels of consensus among the drafters.

Which Articles Have Been Agreed in Full

The following articles have been agreed in full ad referendum, meaning the entire content of these articles has been approved:

    • Article 9: Interference with Electronic Data
    • Article 10: Interference with ICT Systems
    • Article 11: Misuse of Devices 
    • Article 28(4): Search and Seizure Assistance Mandate

We are frustrated to see, for example, that Article 11 (misuse of devices) has been accepted without any modification, and so continues to threaten the development and use of cybersecurity tools. Although it criminalizes creating or obtaining these tools only for purposes of violations of other crimes defined in Articles 7-10 (covering illegal access, illegal interception, interference with electronic data, and interference with ICT systems), those other articles lack mandatory criminal intent requirements and a requirement to define “without right” as bypassing an effective security measure. Because those articles do not specifically exempt activities such as security testing, Article 11 may inadvertently criminalize security research and investigative journalism. It may punish even making or using tools for research purposes if the research, such as security testing, is considered to fall under one of the other crimes.

We are also disappointed that Article 28(4) has also been approved ad referendum. This article could disproportionately empower authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. As we have written before, this provision can be abused to force security experts, software engineers, and tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees—under threat of criminal prosecution—to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with data requests to the extent of their abilities.

Which Provisions Have Been Partially Approved?

The broad prohibitions against unauthorized access and interception have already been approved ad referendum, which means:

  • Article 7: Illegal Access (first paragraph agreed ad referendum)
  • Article 8: Illegal Interception (first paragraph agreed ad referendum)

The first paragraph of each of these articles includes language requiring countries to criminalize accessing systems or data or intercepting “without right.” This means that if someone intentionally gets into a computer or network without authorization, or performs one of the other actions called out in subsequent articles, it should be considered a criminal offense in that country. The additional optional requirements, however, are crucial for protecting the work of security researchers and journalists, and are still on the negotiating table and worth fighting for.  

What Has Not Been Agreed Upon Yet?

There is no agreement yet on Paragraph 2 of Article 7 on Illegal Access and Article 8 on illegal interception, which give countries the option to add specific requirements that can vary from article to article. Such safeguards could provide necessary clarifications to prevent criminalization of legal activities and ensure that laws are not misapplied to stifle research, innovation, and reporting on public interest matters. We made clear throughout this negotiation process that these conditions are a crucially important part of all domestic legislation pursuant to the convention. We’re disappointed to see that states have failed to act on any of our recommendations, including the letter we sent in February.

The final text dated May 23, 2024 of the convention is conspicuously silent on several crucial protections for security researchers:

  • There are no explicit exemptions for security researchers or investigative journalists who act in good faith.
  • The requirement for malicious intent remains optional rather than mandatory, leaving room for broad and potentially abusive interpretations.
  • The text does not specify that bypassing security measures should only be considered unauthorized if those measures are effective, nor make that safeguard mandatory.

How Has Similar Phrasing Caused Problems in the Past?

There is a history of overbroad interpretation under laws such as the United States’ Computer Fraud and Abuse Act, and this remains a significant concern with similarly vague language in other jurisdictions. This can also raise concerns well beyond researchers’ and journalists’ work, as when such legislation is invoked by one company to hinder a competitor’s ability to access online systems or create interoperable technologies. EFF’s paper, “Protecting Security Researchers' Rights in the Americas,” has documented numerous instances in which security researchers faced legal threats for their work:

  • MBTA v. Anderson (2008): The Massachusetts Bay Transportation Authority (MBTA) used a cybercrime law to sue three college students who were planning to give a presentation about vulnerabilities in Boston’s subway fare system.
  • Canadian security researcher (2018): A 19-year-old Canadian was accused of unauthorized use of a computer service for downloading public records from a government website.
  • LinkedIn’s cease and desist letter to hiQ Labs, Inc. (2017): LinkedIn invoked cybercrime law against hiQ Labs for “scraping” — accessing publicly available information on LinkedIn’s website using automated tools. Questions and cases related to this topic have continued to arise, although an appeals court ultimately held that scraping public websites does not violate the CFAA. 
  • Canadian security researcher (2014): A security researcher demonstrated a widely known vulnerability that could be used against Canadians filing their taxes. This was acknowledged by the tax authorities and resulted in a delayed tax filing deadline. Although the researcher claimed to have had only positive intentions, he was charged with a cybercrime.
  • Argentina’s prosecution of Joaquín Sorianello (2015): Software developer Joaquín Sorianello uncovered a vulnerability in election systems and faced criminal prosecution for demonstrating this vulnerability, even though the government concluded that he did not intend to harm the systems and did not cause any serious damage to them.

These examples highlight the chilling effect that vague legal provisions can have on the cybersecurity community, deterring valuable research and leaving critical vulnerabilities unaddressed.

Conclusion

The latest draft of the UN Cybercrime Convention represents a tremendous failure to protect coders’ rights. By ignoring essential recommendations and keeping problematic language, the convention risks stifling innovation and undermining cybersecurity. Delegates must push for urgent revisions to safeguard coders’ rights and ensure that the convention fosters, rather than hinders, the development of a secure digital environment. We are running out of time; action is needed now.

Stay tuned for our next post, in which we will explore other critical areas affected by the proposed convention including its scope and human rights safeguards. 

NETMundial+10 Multistakeholder Statement Pushes for Greater Inclusiveness in Internet Governance Processes

Par : Karen Gullo
23 mai 2024 à 17:55

A new statement about strengthening internet governance processes emerged from the NETMundial +10 meeting in Brazil last month, strongly reaffirming the value of and need for a multistakeholder approach involving full and balanced participation of all parties affected by the internet—from users, governments, and private companies to civil society, technologists, and academics.

But the statement did more than reiterate commitments to more inclusive and fair governance processes. It offered recommendations and guidelines that, if implemented, can strengthen multistakeholder principles as the basis for global consensus-building and democratic governance, including in existing multilateral internet policymaking efforts.

The event and statement, to which EFF contributed with dialogue and recommendations, are a follow-up to the 2014 NETMundial meeting, which ambitiously sought to consolidate multistakeholder processes to internet governance and recommended 10 process principles. It’s fair to say that over the last decade, it’s been an uphill battle turning words into action.

Achieving truly fair and inclusive multistakeholder processes for internet governance and digital policy continues to face many hurdles. Governments, intergovernmental organizations, international standards bodies, and large companies have continued to wield their resources and power. Civil society organizations, user groups, and vulnerable communities are too often sidelined or permitted only token participation.

Governments often tout multistakeholder participation, but in practice, it is a complex task to achieve. The current Ad Hoc Committee negotiations of the proposed UN Cybercrime Treaty highlight the complexity and controversy of multistakeholder efforts. Although the treaty negotiation process was open to civil society and other nongovernmental organizations (NGOs), with positive steps like tracking changes to amendments, most real negotiations occur informally, behind closed doors, excluding NGOs.

This reality presents a stark contrast and practical challenge for truly inclusive multistakeholder participation, as the most important decisions are made without full transparency and broad input. This demonstrates that, despite the appearance of inclusivity, substantive negotiations are not open to all stakeholders.

Consensus building is another important multistakeholder goal but faces significant practical challenges because of the human rights divide among states in multilateral processes. For example, in the context of the Ad Hoc Committee, achieving consensus has remained largely unattainable because of stark differences in human rights standards among member States. Mechanisms for resolving conflicts and enabling decision-making should consider human rights laws to indicate redlines. In the UN Cybercrime Treaty negotiations, reaching consensus could potentially lead to a race to the bottom in human rights and privacy protections.

To be sure, seats at the policymaking table must be open to all to ensure fair representation. Multi-stakeholder participation in multilateral processes allows, for example, civil society to advocate for more human rights-compliant outcomes. But while inclusivity and legitimacy are essential, they alone do not validate the outcomes. An open policy process should always be assessed against the specific issue it addresses, as not all issues require global regulation or can be properly addressed in a specific policy or governance venue.

The NETmundial+10 Multistakeholder Statement, released April 30 following a two-day gathering in São Paulo of 400 registered participants from 60 countries, addresses issues that have prevented stakeholders, especially the less powerful, from meaningful participation, and puts forth guidelines aimed at making internet governance processes more inclusive and accessible to diverse organizations and participants from diverse regions.

For example, the 18-page statement contains recommendations on how to strengthen inclusive and diverse participation in multilateral processes, which includes State-level policy making and international treaty negotiations. Such guidelines can benefit civil society participation in, for example, the UN Cybercrime Treaty negotiations. EFF’s work with international allies in the UN negotiating process is outlined here.

The NETmundial statement takes asymmetries of power head on, recommending that governance processes provide stakeholders with information and resources and offer capacity-building to make these processes more accessible to those from developing countries and underrepresented communities. It sets more concrete guidelines and process steps for multistakeholder collaboration, consensus-building, and decision-making, which can serve as a roadmap in the internet governance sphere.

The statement also recommends strengthening the UN-convened Internet Governance Forum (IGF), a predominant venue for the frank exchange of ideas and multistakeholder discussions about internet policy issues. The multitude of initiatives and pacts around the world dealing with internet policy can cause duplication, conflicting outcomes, and incompatible guidelines, making it hard for stakeholders, especially those from the Global South, to find their place. 


The IGF could strengthen its coordination and information-sharing role and serve as a venue for follow-up on multilateral digital policy agreements. The statement also recommended improvements in the dialogue and coordination between global, regional, and national IGFs to establish continuity between them and bring global attention to local perspectives.

We were encouraged to see the statement recommend that IGF’s process for selecting its host country be transparent and inclusive and take into account human rights practices to create equitable conditions for attendance.

EFF and 45 digital and human rights organizations last year called on the UN Secretary-General and other decision-makers to reverse their decision to grant host status for the 2024 IGF to Saudi Arabia, which has a long history of human rights violations, including the persecution of human and women’s rights defenders, journalists, and online activists. Saudi Arabia’s draconian cybercrime laws are a threat to the safety of civil society members who might consider attending an event there.  
