
Calls to Scrap Jordan’s Cybercrime Law Echo Calls to Reject the Cybercrime Treaty

In a number of countries around the world, communities—and particularly those that are already vulnerable—are threatened by expansive cybercrime and surveillance legislation. One of those countries is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

We’ve criticized this law before, noting how it was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. It broadly criminalizes online content labeled as “pornographic” or deemed to “expose public morals,” and prohibits the use of Virtual Private Networks (VPNs) and other proxies. Now, EFF has joined thirteen digital rights and free expression organizations in calling once again for Jordan to scrap the controversial cybercrime law.

The open letter, organized by Article 19, calls upon Jordanian authorities to cease use of the cybercrime law to target and punish dissenting voices and stop the crackdown on freedom of expression. The letter also reads: “We also urge the new Parliament to repeal or substantially amend the Cybercrime Law and any other laws that violate the right to freedom of expression and bring them in line with international human rights law.”

Jordan’s law is a troubling example of how overbroad cybercrime legislation can be misused to target marginalized communities and suppress dissent. This is the type of legislation that the U.N. General Assembly has expressed concern about, including in 2019 and 2021, when it warned against cybercrime laws being used to target human rights defenders. These concerns are echoed by years of reports from U.N. human rights experts on how abusive cybercrime laws facilitate human rights abuses.

The U.N. Cybercrime Treaty also poses serious threats to free expression. Far from protecting against cybercrime, this treaty risks becoming a vehicle for repressive cross-border surveillance practices. By allowing broad international cooperation in surveillance for any crime deemed 'serious' under national laws—defined as offenses punishable by at least four years of imprisonment—and without robust mandatory safeguards or detailed operational requirements to ensure “no suppression” of expression, the treaty risks being exploited by governments to suppress dissent and target marginalized communities, as seen with Jordan’s overbroad 2023 cybercrime law. The fate of the U.N. Cybercrime Treaty now lies in the hands of member states, who will decide on its adoption later this year.

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

By: Karen Gullo
August 1, 2024 at 10:08

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes—defined as offenses punishable by four years or more. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise it risks authorizing cross-border spying and extensive surveillance, and enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty to genuinely protect human rights, and to ensure comprehensive mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be revised to cover only offenses involving death, injury, or other grave harms, further limiting the treaty’s scope.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

By: Karen Gullo
July 31, 2024 at 10:53

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and does not provide mandatory safeguards to protect their rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and get access to sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and bypassing effective security measures, which would protect researchers and journalists.

By not mandating malicious or dishonest intent when accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

By: Karen Gullo
July 30, 2024 at 18:44

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the Office of the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends including in the draft an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights; narrowing the treaty’s scope; explicitly including language that crimes covered by the treaty must be committed with “criminal intent”; and several other changes.

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, flagged the draft’s treatment of the liability of online platforms for offenses committed by their users, warning that online intermediaries could be held liable for user-generated content they don’t know about or are unaware of.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it risks criminalizing content and conduct that has evidentiary, scientific, or artistic value, and doesn’t sufficiently decriminalize the consensual conduct of older children in relationships.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, thereby jeopardizing the ability of survivors to safely report CSAM activity to law enforcement or platforms.

HRW recommends adding language that excludes material manifestly artistic, among other uses, and conduct that is carried out for legitimate purposes related to documentation of human rights abuses or the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, which promotes international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial requests for data from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them) as ICT—a broader term with a wider remit.”

 

Weak Human Rights Protections: Why You Should Hate the Proposed UN Cybercrime Treaty

By: Karen Gullo
July 30, 2024 at 08:58

The proposed UN Cybercrime Convention dangerously undermines human rights, opening the door to unchecked cross-border surveillance and government overreach. Despite two and a half years of negotiations, the draft treaty authorizes extensive surveillance powers without robust safeguards, omitting essential data protection principles.

This risks turning international efforts to fight cybercrime into tools for human rights abuses and transnational repression.

Safeguards like prior judicial authorization call for a judge's approval of surveillance before it happens, ensuring the measure is legitimate, necessary and proportionate. Notifying individuals when their data is being accessed gives them an opportunity to challenge requests that they believe are disproportionate or unjustified.

Additionally, requiring states to publish statistical transparency reports can provide a clear overview of surveillance activities. These safeguards are not just legal formalities; they are vital for upholding the integrity and legitimacy of law enforcement activities in a democratic society.

Unfortunately, the draft treaty is severely lacking in these protections. An article in the current draft about conditions and safeguards is vaguely written, permitting countries to apply safeguards only "where appropriate" and making them dependent on States’ domestic laws, some of which have weak human rights protections. This means that the level of protection against abusive surveillance and data collection can vary widely based on each country's discretion.

Extensive surveillance powers must be reined in and strong human rights protections added. Without those changes, the proposed treaty unacceptably endangers human rights around the world and should not be approved.

Check out our two detailed analyses about the lack of human rights safeguards in the draft treaty.

Why You Should Hate the Proposed UN Cybercrime Treaty

By: Karen Gullo
July 29, 2024 at 13:00

International UN treaties aren’t usually on users’ radar. They are debated, often over the course of many years, by diplomats and government functionaries in Vienna or New York, and their significance is often overlooked or lost in the flood of information and news we process every day, even when they expand police powers and threaten the fundamental rights of people all over the world.

Such is the case with the proposed UN Cybercrime Treaty. For more than two years, EFF and its international civil society partners have been deeply involved in spreading the word about, and fighting to fix, seriously dangerous flaws in the draft convention. In the coming days we will publish a series of short posts that cut through the draft’s dense, highly technical text explaining the real-world effects of the convention.

The proposed treaty, pushed by Russia and shepherded by the UN Office on Drugs and Crime, is an agreement between nations purportedly aimed at strengthening cross-border investigations and prosecutions of cybercriminals who spread malware, steal data for ransom, and cause data breaches, among other offenses.

The problem is, as currently written, the treaty gives governments massive surveillance and data collection powers to go after not just cybercrime, but any offense they define as serious that involves the use of a computer or communications system. In some countries, that includes criticizing the government in a social media post, expressing support online for LGBTQ+ rights, or publishing news about protests or massacres.

Tech companies and their overseas staff, under certain treaty provisions, would be compelled to help governments in their pursuit of people’s data, locations, and communications, subject to domestic jurisdictions, many of which establish draconian fines.

We have called the draft convention a blank check for surveillance abuse that can be used as a tool for human rights violations and transnational repression. It’s an international treaty that everyone should know and care about because it threatens the rights and freedoms of people across the globe. Keep an eye out for our posts explaining how.

For our key concerns, read our three-pager:

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

By: Karen Gullo
July 24, 2024 at 22:06

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool for countries with poor human rights records that can be used against journalists, dissenters, and everyday people. Watch the briefing here:

 


EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:

  1. Overly Broad Scope and Legal Uncertainty:

  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – create significant legal uncertainty and expand the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression, having a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19, which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (as set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the corporation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:

  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:

  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should: 
    • apply to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • be in accordance with the principles of legality, necessity, proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:

  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28/4, 29, and 30 should be deleted, as they include excessive surveillance measures that open the door for interference with privacy without sufficient safeguards as well as potentially undermining cybersecurity and encryption.
  5. International Cooperation:

  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above.) Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Specifically, where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive rules on obligations for mutual legal assistance with any State Party, with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of “substantial grounds for believing” for the requested State to refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.

Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights or cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)





UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad-Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the currently proposed UN Cybercrime Convention's broad definition of "electronic data" and its inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistencies across countries in human rights protections.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants extensive government authority to monitor and access digital systems and data, categorizing communications data into subscriber data, traffic data, and content data. But it also makes use of a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data," is eligible for three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the other traditional categories of traffic data, subscriber data and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, this means that private unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us also use health and fitness apps in ways that we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if they remain within the device and are not transmitted. Another example could be photos that someone takes on a device but doesn't share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others.

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI systems, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain in the past because it may not have been communicated to, or possessed by, any communications intermediary or system. So it is an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that can be targeted by law enforcement through production orders or search and seizure powers. These emerging technology uses are their own category, but they are most like "content" in communications surveillance, which usually enjoys high protection. "Electronic data" must receive protection equal to that of the "content" of communications, and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries have various kinds of information that are protected by a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, provide varying degrees of extra requirements before law enforcement may access them using production orders or search-and-seizure powers, as well as various protections after the fact, such as preventing their use in prosecutions or civil actions. 

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights, which both misses the chance to require all countries to provide these minimal protections and potentially further undermines the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries that either don’t provide those protections or where they are not solid and clear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principles of legality, necessity, and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide the information necessary to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to help law enforcement in ways that could compromise security or reveal confidential information. For example, an engineer could be required to disclose a security flaw that hasn't been fixed, or to provide encryption keys that protect data, which could then be misused. As written, the provision could be interpreted to allow disproportionate orders that force people to disclose unfixed vulnerabilities to the government. It could also imply forcing people to disclose encryption keys, such as signing keys, on the basis that these are “the necessary information to enable” some form of surveillance.
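To illustrate why the signing-key scenario is so alarming, here is a short sketch of our own (not drawn from the treaty text), using the third-party Python cryptography package; any Ed25519 implementation behaves the same way. Once a private signing key is handed over, signatures made with it are indistinguishable from the vendor's legitimate ones:

```python
# Sketch (ours, not the convention's): why compelled disclosure of a signing key is dangerous.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519

vendor_key = ed25519.Ed25519PrivateKey.generate()  # e.g., a software vendor's update-signing key
public_key = vendor_key.public_key()               # baked into every user's device

# Devices accept anything the key signs; verify() raises InvalidSignature on failure.
legit_update = b"firmware v2.1"
public_key.verify(vendor_key.sign(legit_update), legit_update)    # passes silently

# Whoever obtains the private key -- by court order or otherwise -- can sign arbitrary
# payloads that verify identically; users have no way to tell the difference.
forged_update = b"firmware v2.1 + hidden monitoring module"
public_key.verify(vendor_key.sign(forged_update), forged_update)  # also passes silently
```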

Privacy International and EFF strongly recommend Article 28.4 be removed in its entirety. Unfortunately, it has instead been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft Convention to clarify limits that avoid forcing technologists to reveal confidential information or do work on behalf of law enforcement against their will. Once again, it would also be appropriate to set clear legal standards for how law enforcement can be authorized to seize and look through people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when they’re absolutely necessary, but Article 24 does not mention the principle of necessity, and it lacks individual notice requirements, an explicit grounds requirement, and statistical transparency obligations.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent in the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards—including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight and prior judicial authorization. The convention needs instead to require prior judicial authorization before such information can be demanded from the companies that hold it. 

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.

Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning as they underpin international surveillance cooperation. This means that one country can easily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses. 

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

This is the second post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out our detailed analysis on the criminalization of security research activities under the proposed convention.

The United Nations Ad Hoc Committee is just weeks away from finalizing a too-broad Cybercrime Draft Convention. This draft would normalize unchecked domestic surveillance and rampant government overreach, allowing serious human rights abuses around the world.

The latest draft of the convention—originally spearheaded by Russia but since then the subject of two and a half years of negotiations—still authorizes broad surveillance powers without robust safeguards and fails to spell out data protection principles essential to prevent government abuse of power.

As the August 9 finalization date approaches, Member States have a last chance to address the convention’s lack of safeguards: prior judicial authorization, transparency, user notification, independent oversight, and data protection principles such as data minimization and purpose limitation. If left as is, it can and will be wielded as a tool for systemic rights violations.

Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards or reject the treaty altogether. These domestic surveillance powers matter all the more because they underpin international surveillance cooperation.

EFF’s Advocacy for Human Rights Safeguards

EFF has consistently advocated for human rights safeguards to be a baseline for both the criminal procedural measures and international cooperation chapters. The collection and use of digital evidence can implicate human rights, including privacy, free expression, fair trial, and data protection. Strong safeguards are essential to prevent government abuse.

Regrettably, many states already fall short in this regard. In some cases, surveillance laws have been used to justify overly broad practices that disproportionately target individuals or groups based on their political views—particularly ethnic and religious groups. This leads to the suppression of free expression and association, the silencing of dissenting voices, and discriminatory practices. Examples of these abuses include covert surveillance of internet activity without a warrant, using technology to track individuals in public, and monitoring private communications without legal authorization, oversight, or safeguards.

The Special Rapporteur on the rights to freedom of peaceful assembly and of association has already sounded the alarm about the dangers of current surveillance laws, urging states to revise and amend these laws to comply with international human rights norms and standards governing the rights to privacy, free expression, peaceful assembly, and freedom of association. The UN Cybercrime Convention must be radically amended to avoid entrenching and expanding these existing abuses globally. If not amended, it must be rejected outright.

How the Convention Fails to Protect Human Rights in Domestic Surveillance

The idea that checks and balances are essential to avoid abuse of power is a basic “Government 101” concept. Yet throughout the negotiation process, Russia and its allies have sought to chip away at the already-weakened human rights safeguards and conditions outlined in Article 24 of the proposed Convention. 

Article 24 as currently drafted requires that every country that agrees to this convention must ensure that when it creates, uses, or applies the surveillance powers and procedures described in the domestic procedural measures, it does so under its own laws. These laws must protect human rights and comply with international human rights law. The principle of proportionality must be respected, meaning any surveillance measures should be appropriate and not excessive in relation to the legitimate aim pursued.

Why Article 24 Falls Short

1. The Critical Missing Principles

While the incorporation of the principle of proportionality in Article 24(1) is commendable, the article still fails to explicitly mention the principles of legality, necessity, and non-discrimination, which carry equal weight under human rights law where surveillance is concerned. A primer:

  • The principle of legality requires that restrictions on human rights including the right to privacy be authorized by laws that are clear, publicized, precise, and predictable, ensuring individuals understand what conduct might lead to restrictions on their human rights.
  • The principles of necessity and proportionality ensure that any interference with human rights is demonstrably necessary to achieve a legitimate aim and includes only measures that are proportionate to that aim.
  • The principle of non-discrimination requires that laws, policies and human rights obligations be applied equally and fairly to all individuals, without any form of discrimination based on race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status, including the application of surveillance measures.

Without including all these principles, the safeguards are incomplete and inadequate, increasing the risk of misuse and abuse of surveillance powers.

2. Inadequate Specific Safeguards 

Article 24(2) requires countries to include, where “appropriate,” specific safeguards like:

  • judicial or independent review, meaning surveillance actions must be reviewed or authorized by a judge or an independent regulator.
  • the right to an effective remedy, meaning people must have ways to challenge or seek remedy if their rights are violated.
  • justification and limits, meaning there must be clear reasons for using surveillance and limits on how much surveillance can be done and for how long.

Article 24 (2) introduces three problems:

2.1 The Pitfalls of Making Safeguards Dependent on Domestic Law

Although these safeguards are mentioned, making them contingent on domestic law can vastly weaken their effectiveness, as national laws vary significantly and many of them won’t provide adequate protections. 

2.2 The Risk of Ambiguous Terms Allowing Cherry-Picked Safeguards

The use of vague terms like “as appropriate” in describing how safeguards will apply to individual procedural powers allows for varying interpretations, potentially leading to weaker protections for certain types of data in practice. For example, many states provide minimal or no safeguards for accessing subscriber data or traffic data despite the intrusiveness of resulting surveillance practices. These powers have been used to identify anonymous online activity, to locate and track people, and to map people’s contacts. By granting states broad discretion to decide which safeguards to apply to different surveillance powers, the convention fails to ensure the text will be implemented in accordance with human rights law. Without clear mandatory requirements, there is a real risk that essential protections will be inadequately applied or omitted altogether for certain specific powers, leaving vulnerable populations exposed to severe rights violations. Essentially, a country could just decide that some human rights safeguards are superfluous for a particular kind or method of surveillance, and dispense with them, opening the door for serious human rights abuses.
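To illustrate how revealing these categories of data can be, here is a small, hypothetical sketch showing how little traffic metadata is needed to map a person's contacts. The records and names are invented; no real data source, API, or provider is involved.

    # Illustration only: invented metadata records, no message content, no real data source.
    from collections import Counter

    records = [                      # (source, destination) pairs: metadata, not content
        ("alice", "reporter"), ("alice", "clinic"), ("alice", "reporter"),
        ("alice", "union_rep"), ("alice", "reporter"), ("alice", "clinic"),
    ]

    contacts = Counter(dst for _, dst in records)
    for contact, count in contacts.most_common():
        print(f"alice contacted {contact} {count} time(s)")
    # Frequency alone sketches a social graph: an ongoing tie to a reporter stands out
    # without anyone ever reading a single message.

This is why treating subscriber and traffic data as low-sensitivity, safeguard-optional categories is so dangerous.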

2.3 Critical Safeguards Missing from Article 24(2)

The need for prior judicial authorization, transparency, and user notification is critical to any effective and proportionate surveillance regime, yet none of these requirements is included in Article 24(2).

Prior judicial authorization means that before any surveillance action is taken, it must be approved by a judge. This ensures an independent assessment of the necessity and proportionality of the surveillance measure before it is implemented. Although Article 24 mentions judicial or other independent review, it lacks a requirement for prior judicial authorization. This is a significant omission that increases the risk of abuse and infringement on individuals' rights. Judicial authorization acts as a critical check on the powers of law enforcement and intelligence agencies.

Transparency involves making the existence and extent of surveillance measures known to the public; people must be fully informed of the laws and practices governing surveillance so that they can hold authorities accountable. Article 24 lacks explicit provisions for transparency, so surveillance measures could be conducted in secrecy, undermining public trust and preventing meaningful oversight. Transparency is essential for ensuring that surveillance powers are not misused and that individuals are aware of how their data might be collected and used.

User notification means that individuals who are subjected to surveillance are informed about it, either at the time of the surveillance or afterward when it no longer jeopardizes the investigation. The absence of a user notification requirement in Article 24(2) deprives people of the opportunity to challenge the legality of the surveillance or seek remedies for any violations of their rights. User notification is a key component of protecting individuals’ rights to privacy and due process. It may be delayed, with appropriate justification, but it must still eventually occur and the convention must recognize this.

Independent oversight involves monitoring by an independent body to ensure that surveillance measures comply with the law and respect human rights. This body can investigate abuses, provide accountability, and recommend corrective actions. While Article 24 mentions judicial or independent review, it does not establish a clear mechanism for ongoing independent oversight. Effective oversight requires a dedicated, impartial body with the authority to review surveillance activities continuously, investigate complaints, and enforce compliance. The lack of a robust oversight mechanism weakens the framework for protecting human rights and allows potential abuses to go unchecked.

Conclusion

While it’s somewhat reassuring that Article 24 acknowledges the binding nature of human rights law and its application to surveillance powers, it is utterly unacceptable how vague the article remains about what that actually means in practice. The “as appropriate” clause is a dangerous loophole, letting states implement intrusive powers with minimal limitations and no prior judicial authorization, only to then disingenuously claim this was “appropriate.” This is a blatant invitation for abuse. There’s nothing “appropriate” about this, and the convention must be unequivocally clear about that.

This draft in its current form is an egregious betrayal of human rights and an open door to unchecked surveillance and systemic abuses. Unless these issues are rectified, Member States must recognize the severe flaws and reject this dangerous convention outright. The risks are too great, the protections too weak, and the potential for abuse too high. It’s long past time to stand firm and demand nothing less than a convention that genuinely safeguards human rights.

Check out our detailed analysis on the criminalization of security research activities under the UN Cybercrime Convention. Stay tuned for our next post, where we'll explore other critical areas affected by the convention, including its scope and human rights safeguards.




If Not Amended, States Must Reject the Flawed Draft UN Cybercrime Convention Criminalizing Security Research and Certain Journalism Activities

This is the first post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

The latest and nearly final version of the proposed UN Cybercrime Convention—dated May 23, 2024 but released today, June 14—leaves security researchers’ and investigative journalists’ rights perilously unprotected, despite EFF’s repeated warnings.

The world benefits from people who help us understand how technology works and how it can go wrong. Security researchers, whether independently or within academia or the private sector, perform this important role of safeguarding information technology systems. Relying on the freedom to analyze, test, and discuss IT systems, researchers identify vulnerabilities that can cause major harms if left unchecked. Similarly, investigative journalists and whistleblowers play a crucial role in uncovering and reporting on matters of significant public interest including corruption, misconduct, and systemic vulnerabilities, often at great personal risk.

For decades, EFF has fought for security researchers and journalists, provided legal advice to help them navigate murky criminal laws, and advocated for their right to conduct security research without fear of legal repercussions. We’ve helped researchers when they’ve faced threats for performing or publishing their research, including identifying and disclosing critical vulnerabilities in systems. We’ve seen how vague and overbroad laws on unauthorized access have chilled good-faith security research, threatening those who are trying to keep us safe or report on public interest topics. 

Now, just as some governments have finally recognized the importance of protecting security researchers’ work, many of the UN convention’s criminalization provisions threaten to spread antiquated and ambiguous language around the world with no meaningful protections for researchers or journalists. If these and other issues are not addressed, the convention poses a global threat to cybersecurity and press freedom, and UN Member States must reject it.

This post will focus on one critical aspect of coders’ rights under the newest released text: the provisions that jeopardize the work of security researchers and investigative journalists. We will delve into other aspects of the convention in subsequent posts.

How the Convention Fails to Protect Security Research and Reporting on Public Interest Matters

What Provisions Are We Discussing?

Articles 7 to 11 of the Criminalization Chapter—covering illegal access, illegal interception, interference with electronic data, interference with ICT systems, and misuse of devices—are the core cybercrime offenses that security researchers have often been accused of committing as a result of their work. (In previous drafts of the convention, these were Articles 6-10.)

  • Illegal Access (Article 7): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities.
  • Illegal Interception (Article 8): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require malicious criminal intent (mens rea). A brief sketch of what benign traffic analysis can look like appears below.
  • Interference with Data (Article 9) and Interference with Computer Systems (Article 10): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks that could be described as “interference” even though they don’t cause harm and are performed without criminal malicious intent.

All of these articles fail to include a mandatory element of criminal intent to cause harm, steal, or defraud. A requirement that the activity cause serious harm is also absent from Article 10 and optional in Article 9. These safeguards must be mandatory.
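To ground the Article 8 bullet above, the following is a minimal sketch of benign, passive traffic analysis: observing packets on one's own machine to flag connections that still use unencrypted protocols. It assumes the third-party scapy package and local capture privileges, and the port list is illustrative; nothing here intercepts or alters anyone else's traffic.

    # Hedged sketch of passive traffic analysis on one's own machine (assumes scapy).
    from scapy.all import IP, TCP, sniff

    PLAINTEXT_PORTS = {21: "FTP", 23: "Telnet", 80: "HTTP"}   # commonly unencrypted services

    def flag_plaintext(pkt) -> None:
        # Report connections to ports that typically carry unencrypted traffic.
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport in PLAINTEXT_PORTS:
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: {PLAINTEXT_PORTS[pkt[TCP].dport]} (unencrypted)")

    if __name__ == "__main__":
        # Observe 100 packets on the local interface and summarize them; nothing is modified.
        sniff(count=100, prn=flag_plaintext, store=False)

Under a law keyed only to interception "without right," with no mandatory intent element, even this kind of observation could be swept in.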

What We Told the UN Drafters of the Convention in Our Letter

Earlier this year, EFF submitted a detailed letter to the drafters of the UN Cybercrime Convention on behalf of 124 signatories, outlining essential protections for coders. 

Our recommendations included defining unauthorized access to include only those accesses that bypass security measures, and only where such security measures count as effective. The convention’s existing language harks back to cases where people were criminally prosecuted just for editing part of a URL.
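As a hedged illustration of the kind of conduct those prosecutions reached, the sketch below does nothing more than increment the numeric identifier at the end of a public URL. The domain and path are placeholders, and no credential or security measure is involved.

    # Illustration only: placeholder URL, no authentication or security measure is bypassed.
    from urllib.parse import urlparse, urlunparse

    def next_record_url(url: str) -> str:
        """Return the same URL with its trailing numeric ID incremented by one."""
        parts = urlparse(url)
        prefix, _, record_id = parts.path.rpartition("/")
        return urlunparse(parts._replace(path=f"{prefix}/{int(record_id) + 1}"))

    print(next_record_url("https://example.org/records/1041"))
    # -> https://example.org/records/1042

Whether this should ever count as criminal "unauthorized access" is exactly what an effective-security-measure requirement is meant to settle.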

We also recommended ensuring that criminalization of actions requires clear malicious or dishonest intent to harm, steal, or infect with malware. And we recommended explicitly exempting good-faith security research and investigative journalism on issues of public interest from criminal liability.

What Has Already Been Approved?

Several provisions of the UN Cybercrime Convention have been approved ad referendum. These include both complete articles and specific paragraphs, indicating varying levels of consensus among the drafters.

Which Articles Have Been Agreed in Full

The following articles have been agreed in full ad referendum, meaning the entire content of these articles has been approved:

    • Article 9: Interference with Electronic Data
    • Article 10: Interference with ICT Systems
    • Article 11: Misuse of Devices 
    • Article 28(4): Search and Seizure Assistance Mandate

We are frustrated to see, for example, that Article 11 (misuse of devices) has been accepted without any modification, and so continues to threaten the development and use of cybersecurity tools. Although it criminalizes creating or obtaining these tools only when they are intended for committing the other crimes defined in Articles 7-10 (covering illegal access, illegal interception, interference with electronic data, and interference with ICT systems), those articles lack mandatory criminal-intent requirements and a requirement to define “without right” as bypassing an effective security measure. Because those articles do not specifically exempt activities such as security testing, Article 11 may inadvertently criminalize security research and investigative journalism. It may punish even making or using tools for research purposes if the research, such as security testing, is considered to fall under one of the other crimes.

We are also disappointed that Article 28(4) has been approved ad referendum. This article could disproportionately empower authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. As we have written before, this provision can be abused to force security experts, software engineers, and tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees—under threat of criminal prosecution—to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with data requests to the extent of their abilities.

Which Provisions Have Been Partially Approved?

The broad prohibitions against unauthorized access and interception have already been approved ad referendum, which means:

  • Article 7: Illegal Access (first paragraph agreed ad referendum)
  • Article 8: Illegal Interception (first paragraph agreed ad referendum)

The first paragraph of each of these articles includes language requiring countries to criminalize accessing systems or data or intercepting “without right.” This means that if someone intentionally gets into a computer or network without authorization, or performs one of the other actions called out in subsequent articles, it should be considered a criminal offense in that country. The additional optional requirements, however, are crucial for protecting the work of security researchers and journalists, and are still on the negotiating table and worth fighting for.  

What Has Not Been Agreed Upon Yet?

There is no agreement yet on paragraph 2 of Article 7 (Illegal Access) or paragraph 2 of Article 8 (Illegal Interception), which give countries the option to add specific requirements that can vary from article to article. Such safeguards could provide necessary clarifications to prevent criminalization of legal activities and ensure that laws are not misapplied to stifle research, innovation, and reporting on public interest matters. We made clear throughout this negotiation process that these conditions are a crucially important part of all domestic legislation pursuant to the convention. We’re disappointed to see that states have failed to act on any of our recommendations, including the letter we sent in February.

The final text dated May 23, 2024 of the convention is conspicuously silent on several crucial protections for security researchers:

  • There are no explicit exemptions for security researchers or investigative journalists who act in good faith.
  • The requirement for malicious intent remains optional rather than mandatory, leaving room for broad and potentially abusive interpretations.
  • The text does not specify that bypassing security measures should only be considered unauthorized if those measures are effective, nor make that safeguard mandatory.

How Has Similar Phrasing Caused Problems in the Past?

There is a history of overbroad interpretation under laws such as the United States’ Computer Fraud and Abuse Act, and this remains a significant concern with similarly vague language in other jurisdictions. This can also raise concerns well beyond researchers’ and journalists’ work, as when such legislation is invoked by one company to hinder a competitor’s ability to access online systems or create interoperable technologies. EFF’s paper, “Protecting Security Researchers' Rights in the Americas,” has documented numerous instances in which security researchers faced legal threats for their work:

  • MBTA v. Anderson (2008): The Massachusetts Bay Transit Authority (MBTA) used a cybercrime law to sue three college students who were planning to give a presentation about vulnerabilities in Boston’s subway fare system.
  • Canadian security researcher (2018): A 19-year-old Canadian was accused of unauthorized use of a computer service for downloading public records from a government website.
  • LinkedIn’s cease and desist letter to hiQ Labs, Inc. (2017): LinkedIn invoked cybercrime law against hiQ Labs for “scraping” — accessing publicly available information on LinkedIn’s website using automated tools (a minimal sketch of this kind of automated access appears after this list). Questions and cases related to this topic have continued to arise, although an appeals court ultimately held that scraping public websites does not violate the CFAA. 
  • Canadian security researcher (2014): A security researcher demonstrated a widely known vulnerability that could be used against Canadians filing their taxes. This was acknowledged by the tax authorities and resulted in a delayed tax filing deadline. Although the researcher claimed to have had only positive intentions, he was charged with a cybercrime.
  • Argentina’s prosecution of Joaquín Sorianello (2015): Software developer Joaquín Sorianello uncovered a vulnerability in election systems and faced criminal prosecution for demonstrating this vulnerability, even though the government concluded that he did not intend to harm the systems and did not cause any serious damage to them.
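For readers unfamiliar with the term, here is a minimal sketch of the kind of automated access ("scraping") at issue in the hiQ case noted above: fetching a public page and collecting its links using only the standard library. The URL is a placeholder, and no login or technical barrier is bypassed.

    # Illustration only: placeholder URL; fetches one public page and lists its links.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self) -> None:
            super().__init__()
            self.links: list[str] = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href" and value)

    if __name__ == "__main__":
        page = urlopen("https://example.org/").read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(page)
        print(collector.links)   # every public hyperlink on the page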

These examples highlight the chilling effect that vague legal provisions can have on the cybersecurity community, deterring valuable research and leaving critical vulnerabilities unaddressed.

Conclusion

The latest draft of the UN Cybercrime Convention represents a tremendous failure to protect coders’ rights. By ignoring essential recommendations and keeping problematic language, the convention risks stifling innovation and undermining cybersecurity. Delegates must push for urgent revisions to safeguard coders’ rights and ensure that the convention fosters, rather than hinders, the development of a secure digital environment. We are running out of time; action is needed now.

Stay tuned for our next post, in which we will explore other critical areas affected by the proposed convention including its scope and human rights safeguards. 

NETMundial+10 Multistakeholder Statement Pushes for Greater Inclusiveness in Internet Governance Processes

By: Karen Gullo
May 23, 2024 at 17:55

A new statement about strengthening internet governance processes emerged from the NETMundial +10 meeting in Brazil last month, strongly reaffirming the value of and need for a multistakeholder approach involving full and balanced participation of all parties affected by the internet—from users, governments, and private companies to civil society, technologists, and academics.

But the statement did more than reiterate commitments to more inclusive and fair governance processes. It offered recommendations and guidelines that, if implemented, can strengthen multistakeholder principles as the basis for global consensus-building and democratic governance, including in existing multilateral internet policymaking efforts.


The event and statement, to which EFF contributed with dialogue and recommendations, are a follow-up to the 2014 NETMundial meeting, which ambitiously sought to consolidate multistakeholder processes for internet governance and recommended 10 process principles. It’s fair to say that over the last decade, it’s been an uphill battle turning words into action.

Achieving truly fair and inclusive multistakeholder processes for internet governance and digital policy continues to face many hurdles. Governments, intergovernmental organizations, international standards bodies, and large companies have continued to wield their resources and power. Civil society organizations, user groups, and vulnerable communities are too often sidelined or permitted only token participation.

Governments often tout multistakeholder participation, but in practice, it is a complex task to achieve. The current Ad Hoc Committee negotiations of the proposed UN Cybercrime Treaty highlight the complexity and controversy of multistakeholder efforts. Although the treaty negotiation process was open to civil society and other nongovernmental organizations (NGOs), with positive steps like tracking changes to amendments, most real negotiations occur informally, excluding NGOs, behind closed doors.

This reality presents a stark contrast and practical challenge for truly inclusive multistakeholder participation, as the most important decisions are made without full transparency and broad input. This demonstrates that, despite the appearance of inclusivity, substantive negotiations are not open to all stakeholders.

Consensus building is another important multistakeholder goal but faces significant practical challenges because of the human rights divide among states in multilateral processes. For example, in the context of the Ad Hoc Committee, achieving consensus has remained largely unattainable because of stark differences in human rights standards among member States. Mechanisms for resolving conflicts and enabling decision-making should consider human rights laws to indicate redlines. In the UN Cybercrime Treaty negotiations, reaching consensus could potentially lead to a race to the bottom in human rights and privacy protections.

To be sure, seats at the policymaking table must be open to all to ensure fair representation. Multi-stakeholder participation in multilateral processes allows, for example, civil society to advocate for more human rights-compliant outcomes. But while inclusivity and legitimacy are essential, they alone do not validate the outcomes. An open policy process should always be assessed against the specific issue it addresses, as not all issues require global regulation or can be properly addressed in a specific policy or governance venue.

The NETmundial+10 Multistakeholder Statement, released April 30 following a two-day gathering in São Paulo of 400 registered participants from 60 countries, addresses issues that have prevented stakeholders, especially the less powerful, from meaningful participation, and puts forth guidelines aimed at making internet governance processes more inclusive and accessible to diverse organizations and participants from diverse regions.

For example, the 18-page statement contains recommendations on how to strengthen inclusive and diverse participation in multilateral processes, which include State-level policymaking and international treaty negotiations. Such guidelines can benefit civil society participation in, for example, the UN Cybercrime Treaty negotiations. EFF’s work with international allies in the UN negotiating process is outlined here.

The NETmundial statement takes asymmetries of power head on, recommending that governance processes provide stakeholders with information and resources and offer capacity-building to make these processes more accessible to those from developing countries and underrepresented communities. It sets more concrete guidelines and process steps for multistakeholder collaboration, consensus-building, and decision-making, which can serve as a roadmap in the internet governance sphere.

The statement also recommends strengthening the UN-convened Internet Governance Forum (IGF), a predominant venue for the frank exchange of ideas and multistakeholder discussions about internet policy issues. The multitude of initiatives and pacts around the world dealing with internet policy can cause duplication, conflicting outcomes, and incompatible guidelines, making it hard for stakeholders, especially those from the Global South, to find their place. 


The IGF could strengthen its coordination and information-sharing role and serve as a venue for follow-up on multilateral digital policy agreements. The statement also recommended improvements in the dialogue and coordination between global, regional, and national IGFs to establish continuity between them and bring global attention to local perspectives.

We were encouraged to see the statement recommend that IGF’s process for selecting its host country be transparent and inclusive and take into account human rights practices to create equitable conditions for attendance.

EFF and 45 digital and human rights organizations last year called on the UN Secretary-General and other decision-makers to reverse their decision to grant host status for the 2024 IGF to Saudi Arabia, which has a long history of human rights violations, including the persecution of human and women’s rights defenders, journalists, and online activists. Saudi Arabia’s draconian cybercrime laws are a threat to the safety of civil society members who might consider attending an event there.  

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

By: Karen Gullo
February 7, 2024 at 10:57

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement, including an intent to cause damage, defraud, or harm, is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS), a minor infraction ordinarily considered a civil issue, could be elevated into a criminal offense via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually “effective.” This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking, which can be done innocently simply by changing location, should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing systems security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers, whether within organizations or independent, discover, report, and assist in fixing tens of thousands of critical Common Vulnerabilities and Exposures (CVEs) reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet it often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty's core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes on security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.

To be submitted by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282 on behalf of 124 signatories.

Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, Phd; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global))
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)

Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Vía Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

By: Karen Gullo
February 7, 2024 at 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language as well as a requirement that unauthorized access be done with malicious intent is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

By: Karen Gullo
January 29, 2024 at 12:42

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders and add robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the outset; Member States eventually dropped some of the proposed offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing that explicit data protection and human rights standards, including legitimate purpose, nondiscrimination, prior judicial authorization, necessity, and proportionality, apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

EFF and More Than 100 NGOs Set Non-Negotiable Redlines Ahead of UN Cybercrime Treaty Negotiations

Par : George Wong
23 janvier 2024 à 09:44

EFF has joined forces with 110 NGOs today in a joint statement delivered to the United Nations Ad Hoc Committee, clearly outlining civil society's non-negotiable redlines for the proposed UN Cybercrime Treaty and asserting that states should reject the proposed treaty if these essential changes are not implemented.

The last draft, published on November 6, 2023, does not adequately ensure adherence to human rights law and standards. Initially focused on cybercrime, the proposed Treaty has alarmingly evolved into an expansive surveillance tool.

Katitza Rodriguez, EFF Policy Director for Global Privacy, asserts ahead of the upcoming concluding negotiations:

The proposed treaty needs more than just minor adjustments; it requires a more focused, narrowly defined approach to tackle cybercrime. This change is essential to prevent the treaty from becoming a global surveillance pact rather than a tool for effectively combating core cybercrimes. With its wide-reaching scope and invasive surveillance powers, the current version raises serious concerns about cross-border repression and potential police overreach. Above all, human rights must be the treaty's cornerstone, not an afterthought. If states can't unite on these key points, they must outright reject the treaty.

Historically, cybercrime legislation has been exploited to target journalists and security researchers, suppress dissent and whistleblowers, endanger human rights defenders, limit free expression, and justify unnecessary and disproportionate state surveillance measures. We are concerned that the proposed Treaty, as it stands now, will exacerbate these problems. The treaty's concluding negotiation session will be held at UN Headquarters in New York from January 29 to February 10. EFF will be attending in person.

The joint statement specifically calls on States to narrow the scope of criminalization provisions to well-defined cyber-dependent crimes; shield security researchers, whistleblowers, activists, and journalists from being prosecuted for their legitimate activities; explicitly include language on international human rights law, data protection, and gender mainstreaming; limit the scope of the domestic criminal procedural measures and international cooperation to the core cybercrimes established in the criminalization chapter; and address concerns that the current draft could weaken cybersecurity and encryption. Additionally, it calls for specific safeguards, such as the principles of prior judicial authorization, necessity, legitimate aim, and proportionality.

UAE Confirms Trial Against 84 Detainees; Ahmed Mansoor Suspected Among Them

10 janvier 2024 à 05:51

The UAE confirmed this week that it has placed 84 detainees on trial, on charges of “establishing another secret organization for the purpose of committing acts of violence and terrorism on state territory.” Suspected to be among those facing trial is award-winning human rights defender Ahmed Mansoor, also known as “the million dollar dissident,” as he was once the target of exploits that exposed major security flaws in Apple’s iOS operating system—the kind of “zero-day” vulnerabilities that fetch seven figures on the exploit market. Mansoor drew the ire of UAE authorities for criticizing the country’s internet censorship and surveillance apparatus and for calling for a free press and democratic freedoms in the country.

Having previously been arrested in 2011 and sentenced to three years' imprisonment for “insulting officials,” Ahmed Mansoor was released after eight months due to a presidential pardon influenced by international pressure. Later, Mansoor faced new speech-related charges for using social media to “publish false information that harms national unity.” During this period, authorities held him in an unknown location for over a year, deprived of legal representation, before convicting him again in May 2018 and sentencing him to ten years in prison under the UAE’s draconian cybercrime law. We have long advocated for his release, and are joined in doing so by hundreds of digital and human rights organizations around the world.

At the recent COP28 climate talks, Human Rights Watch, Amnesty International, and other activists conducted a protest inside the UN-protected “blue zone” to raise awareness of Mansoor’s plight, as well as the cases of UAE detainee Mohamed El-Siddiq and Egyptian-British activist Alaa Abd El Fattah. At the same time, a dissident group reported that the UAE was proceeding with the trial against 84 of its detainees.

We reiterate our call for Ahmed Mansoor’s freedom, and take this opportunity to raise further awareness of the oppressive nature of the legislation that was used to imprison him. The UAE’s use of its criminal law to silence those who speak truth to power is another example of how counter-terrorism laws restrict free expression and justify disproportionate state surveillance. This concern is not hypothetical; a 2023 study by the Special Rapporteur on counter-terrorism found widespread and systematic abuse of civil society and civic space through the use of similar laws supposedly designed to counter terrorism. Moreover, and problematically, references “related to terrorism” in the treaty preamble are still included in the latest version of a proposed United Nations Cybercrime Treaty, currently being negotiated by more than 190 member states, even though there is no agreed-upon definition of terrorism in international law. If approved as currently written, the UN Cybercrime Treaty has the potential to substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide. It could also enable States to justify repressive measures that overly restrict free expression and peaceful dissent.

Latest Draft of UN Cybercrime Treaty Is A Big Step Backward

1 décembre 2023 à 16:49

A new draft of the controversial United Nations Cybercrime Treaty has only heightened concerns that the treaty will criminalize expression and dissent, create extensive surveillance powers, and facilitate cross-border repression. 

The proposed treaty, originally aimed at combating cybercrime, has morphed into an expansive surveillance treaty, raising the risk of overreach in both national and international investigations. The new draft retains a controversial provision allowing states to compel engineers or employees to undermine security measures, posing a threat to encryption.  

This new draft not only disregards but also deepens our concerns, empowering nations to cast a wider net by accessing data stored by companies abroad, potentially in violation of other nations’ privacy laws. It perilously broadens its scope beyond the cybercrimes specifically defined in the Convention, encompassing a long list of non-cybercrimes. The draft retains the concerning expansion of the scope of evidence collection and sharing across borders to any serious crime, including crimes that blatantly violate human rights law. Furthermore, this new version extends the powers to investigate and prosecute crimes beyond those detailed in the treaty; until now, such powers were limited to the crimes defined in Articles 6-16 of the convention.

We are deeply troubled by the blatant disregard of our input, which moves the text further away from consensus. This isn't just an oversight; it's a significant step in the wrong direction. 

Initiated in 2022, treaty negotiations have been marked by ongoing disagreements between governments on the treaty’s scope and on what role, if any, human rights should play in its design and implementation. The new draft was released Tuesday, Nov. 28; governments will hold closed-door talks December 19-20 in Vienna, in an attempt to reach consensus on what crimes to include in the treaty, and the draft will be considered at the final negotiating session in New York at the end of January 2024, when it’s supposed to be finalized and adopted.  

Deborah Brown, Human Rights Watch’s acting associate director for technology and human rights, said this latest draft “is primed to facilitate abuses on a global scale, through extensive cross border powers to investigate virtually any imaginable ‘crime’ – like peaceful dissent or expression of sexual orientation – while undermining the treaty’s purpose of addressing genuine cybercrime. Governments should not rush to conclude this treaty without ensuring that it elevates, rather than sacrifices, our fundamental rights.”

The Growing Threat of Cybercrime Law Abuse: LGBTQ+ Rights in MENA and the UN Cybercrime Draft Convention

This is Part II of a series examining the proposed UN Cybercrime Treaty in the context of LGBTQ+ communities. Part I looks at the draft Convention’s potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

In the digital age, the rights of the LGBTQ+ community in the Middle East and North Africa (MENA) are gravely threatened by expansive cybercrime and surveillance legislation. This reality leads to systemic suppression of LGBTQ+ identities, compelling individuals to censor themselves for fear of severe reprisal. This looming threat becomes even more pronounced in countries like Iran, where same-sex conduct is punishable by death, and Egypt, where merely raising a rainbow flag can lead to being arrested and tortured.

Enter the proposed UN Cybercrime Convention. If ratified in its present state, the convention might not only bolster certain countries' domestic surveillance powers to probe actions that some nations mislabel as crimes, but it could also strengthen and validate international collaboration grounded in these powers. Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures for acts that are in stark contradiction with international human rights law. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. 

The draft convention, in Article 35, permits each country to define its own crimes under domestic laws when requesting assistance from other nations in cross-border policing and evidence collection. In certain countries, many of these criminal laws might be based on subjective moral judgments that suppress what is considered free expression in other nations, rather than adhering to universally accepted standards.

Indeed, international cooperation is permissible for crimes that carry a penalty of four years of imprisonment or more; there is also a concerning move afoot to reduce this threshold to merely three years. This applies whether or not the alleged offense is cyber-related. Such provisions could result in heightened cross-border monitoring and potential repercussions for individuals, leading to torture or even the death penalty in some jurisdictions.

While some countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. The draft treaty allows a country to refuse a request if the activity in question is not a crime in its own domestic regime (the principle of "dual criminality"). However, given the current strain on the mutual legal assistance treaty (MLAT) system, there is an increasing likelihood that requests, even from countries with contentious laws, could slip through the checks. This opens the door for nations to inadvertently assist in operations that might contradict global human rights norms. And where countries do share the same subjective values and problematically criminalize the same conduct, this draft treaty seemingly provides a justification for their cooperation.

One of the more recently introduced pieces of legislation that exemplifies these issues is Jordan's Cybercrime Law of 2023. Introduced as part of King Abdullah II’s modernization reforms to increase political participation across Jordan, the law was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. Jordan's pre-existing cybercrime law had already been used against LGBTQ+ people, and the new law expands the state's capacity to do so. With its overly broad and vaguely defined terms, it will severely restrict individual human rights across the country and will become a tool for prosecuting innocent individuals for their online speech.

Article 13 of the Jordanian law expansively criminalizes a wide set of actions tied to online content branded as “pornographic,” from its creation to its distribution. The ambiguity in defining what is pornographic could inadvertently suppress content that merely expresses various sexualities, mistakenly deeming it inappropriate. This goes beyond regulating explicit material; it can suppress genuine expressions of identity. The penalty for such actions is no less than six months' imprisonment.

Meanwhile, the nebulous wording in Article 14 of Jordan's law—terms like “expose public morals,” “debauchery,” and “seduction”—is equally concerning. Such vague language is ripe for misuse, potentially curbing LGBTQ+ content by erroneously associating diverse sexual orientations with immorality. Both articles, in their current form, cast shadows on free expression and are stark reminders that such provisions can lead to over-policing of online content that is not harmful at all. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. Deputy Leader of the Opposition Saleh al Armouti went further, claiming that “Jordan will become a big jail.”

Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their rights to freedom of expression and privacy. Article 12 of the Cybercrime Law prohibits the use of Virtual Private Networks (VPNs) and other proxies, with at least six months imprisonment or a fine for violations. 

This will force people in Jordan to choose between engaging in free online expression or keeping their personal identity private. More specifically, this will negatively impact LGBTQ+ people and human rights defenders in Jordan who particularly rely on VPNs and anonymity to protect themselves online. The impact of Article 12 is exacerbated by the fact that there is no comprehensive data privacy legislation in Jordan to protect people’s rights during cyber attacks and data breaches.  

This is not the first time Jordan has limited access to information and content online. In December 2022, Jordanian authorities blocked TikTok to prevent the dissemination of live updates and information during the workers’ protests in the country's south, and authorities had previously blocked Clubhouse as well.

This crackdown on free speech has particularly impacted journalists, such as the recent arrest of Jordanian journalist Heba Abu Taha for criticizing Jordan’s King over his connections with Israel. Given that online platforms like TikTok and Twitter are essential for activists, organizers, journalists, and everyday people around the world to speak truth to power and fight for social justice, the restrictions placed on free speech by Jordan’s new Cybercrime Law will have a detrimental impact on political activism and community building across Jordan.

People across Jordan have protested the law, and the European Union has expressed concern about how it could limit freedom of expression online and offline. In August, EFF and 18 other civil society organizations wrote to the King of Jordan, calling for the rejection of the country’s draft cybercrime legislation. With the law now in effect, we urge Jordan to repeal the Cybercrime Law 2023.

Jordan’s Cybercrime Law has been said to be a “true copy” of the United Arab Emirates (UAE) Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes. This law replaced its predecessor, which had been used to stifle expression critical of the government or its policies—and was used to sentence human rights defender Ahmed Mansoor to 10 years in prison. 

The UAE’s new cybercrime law further restricts the already heavily-monitored online space and makes it harder for ordinary citizens, as well as journalists and activists, to share information online. More specifically, Article 22 mandates prison sentences of between three and 15 years for those who use the internet to share “information not authorized for publishing or circulating liable to harm state interests or damage its reputation, stature, or status.” 

In September 2022, Tunisia passed its new cybercrime law in Decree-Law No. 54 on “combating offenses relating to information and communication systems.” The wide-ranging decree has been used to stifle opposition free speech, and mandates a five-year prison sentence and a fine for the dissemination of “false news” or information that harms “public security.” In the year since Decree-Law 54 was enacted, authorities in Tunisia have prosecuted media outlets and individuals for their opposition to government policies or officials. 

The first criminal investigation under Decree-Law 54 saw the arrest of student Ahmed Hamada in October 2022 for operating a Facebook page that reported on clashes between law enforcement and residents of a neighborhood in Tunisia. 

Similar tactics are being used in Egypt, where the 2018 cybercrime law, Law No. 175/2018, contains broad and vague provisions to silence dissent, restrict privacy rights, and target LGBTQ+ individuals. More specifically, Articles 25 and 26 have been used by the authorities to crack down on content that allegedly violates “family values.”

Since the law's enactment, these provisions have also been used to target LGBTQ+ individuals across Egypt, particularly for the publication or sending of pornography under Article 8, as well as for illegal access to an information network under Article 3. For example, in March 2022 a court in Egypt charged singers Omar Kamal and Hamo Beeka with “violating family values” for dancing and singing in a video uploaded to YouTube. In another example, police have used cybercrime laws to prosecute LGBTQ+ individuals for using dating apps such as Grindr.

And in Saudi Arabia, national authorities have used cybercrime regulations and counterterrorism legislation to prosecute online activism and stifle dissenting opinions. Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online—for composing a tweet, liking a Facebook post, or writing a blog post. And while Saudi Arabia has no specific law concerning gender identity and sexual orientation, authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.” 

These provisions have been used to prosecute individuals for peaceful actions, particularly since the Arab Spring in 2011. More recently, in August 2022, Salma al-Shehab was sentenced to 34 years in prison with a subsequent 34-year travel ban for her alleged “crime” of sharing content in support of prisoners of conscience and women human rights defenders.

These cybercrime laws demonstrate that if the proposed UN Cybercrime Convention is ratified in its current form, with its broad scope, it would authorize domestic surveillance for the investigation of any offenses, such as those in Articles 12, 13, and 14 of Jordan's law. Additionally, the convention could authorize international cooperation for the investigation of crimes penalized with three or four years of imprisonment, as seen in countries such as the UAE, Tunisia, Egypt, and Saudi Arabia.

As Canada warned (at minute 01:56 ) at the recent negotiation session, these expansive provisions in the Convention permit states to unilaterally define and broaden the scope of criminal conduct, potentially paving the way for abuse and transnational repression. While the Convention may incorporate some procedural safeguards, its far-reaching scope raises profound questions about its compatibility with the key tenets of human rights law and the principles enshrined in the UN Charter. 

The root problem lies not in the severity of penalties, but in the fact that some countries criminalize behaviors and expression that are protected under international human rights law and the UN Charter. This is alarming, given that numerous laws affecting the LGBTQ+ community carry penalties within these ranges, making the potential for misuse of such cooperation profound.

In a nutshell, the proposed UN treaty amplifies the existing threats to the LGBTQ+ community. It endorses a framework where nations can surveil benign activities such as sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

Online, the lack of legal protection of subscriber data threatens the anonymity of the community, making them vulnerable to identification and subsequent persecution. The mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to their identities being disclosed, putting them at significant risk.

Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be intercepted and turned against them. 

To all member states: This is a pivotal moment. This is our opportunity to ensure the digital future is one where rights are championed, not compromised. Pledge to protect the rights of all, especially those communities like the LGBTQ+ that are most vulnerable. The international community must unite in its commitment to ensure that the proposed convention serves as an instrument of protection, not persecution.


