
Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review

December 29, 2024 at 05:50

This was a historic year. A year in which elections took place in countries home to almost half the world’s population, a year of war, and a year in which several governments collapsed or descended into chaos. It was also a year of new technologies, policy changes, and legislative developments. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.

Internet shutdowns

It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions not only inhibit people from sharing news of what’s happening on the ground; they also impede access to basic services, commerce, and communications.

Repression of speech in times of conflict

But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.

Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel’s Cyber Unit, submitted comment to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (a position the Oversight Board notably agreed with), and submitted comment to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact that restrictions imposed by governments and companies have on expression.

In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRi, and many others.

Restrictions on content, age, and identity

Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed “harmful” by regulators to minors. And similarly in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has also enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.

While these governments ostensibly act to protect children from harm, as we have repeatedly demonstrated, such measures can also harm young people by preventing them from accessing information that is not taught in schools or otherwise accessible in their communities.

One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.

Cybercrime

We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Saving the Internet in Europe: How EFF Works in Europe

December 16, 2024 at 11:32

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings a set of unique expertise in European digital policy making and fundamental rights online. They engage with lawmakers, provide policy expertise and coordinate EFF’s work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work does not only span EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and also work on issues facing users in the Balkans or accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked to advise on online regulation in Georgia, Moldova and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?

December 11, 2024 at 09:00

The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring removal. A panel of eleven justices is examining two cases jointly, and one of them directly challenges whether Brazil’s internet intermediary liability regime for user-generated content aligns with the country’s Federal Constitution or fails to meet constitutional standards. The outcome of these cases could seriously undermine important free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates.

The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule included in the Brazilian child protection law.

The decision the court takes will set a precedent for lower courts regarding two main topics: whether Marco Civil’s internet intermediary liability regime is aligned with Brazil's Constitution and whether internet application providers have the obligation to monitor online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it can have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.

After a public hearing held last year, the Court's sessions about the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of Marco Civil’s constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of these activities.

However, platforms could be held liable for certain content regardless of notification, leading to a monitoring duty. Examples include content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It also includes the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.

The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.

Some Background About Marco Civil’s Intermediary Liability Regime

The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression and over-remove content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.

The provision was in line with the Special Rapporteurs for Freedom of Expression from the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR’s Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship, and would run against the State’s duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns of over-removal and the weaponization of notification mechanisms to censor protected speech.

A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem is more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant just as the balance its intermediary liability rule has struck persists as a proper way of tackling these concerns. Regarding current challenges, changes to the liability regime suggested in Dias Toffoli's vote will likely reinforce rather than reduce corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.

The Cases Under Trial and The Reach of the Supreme Court’s Decision

The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now defunct Orkut platform. She asked for the deletion of the community and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community following the decision of the lower court, but the judicial dispute about the compensation continued.

In the second case, the plaintiff sued Facebook after the company didn’t remove an offensive fake account impersonating her. The lawsuit sought to shut down the fake account, obtain the identification of the account’s IP address, and secure compensation for moral damages. As Marco Civil had already passed, the judge denied the moral compensation request. Yet, the appeals court found that Facebook could be liable for not removing the fake account after an extrajudicial notification, finding Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protections for consumers.

Both cases went all the way up to the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, the topics under analysis by the Brazilian Supreme Court in these appeals are not only the individual cases, but also the court’s understanding of the general repercussion issues involved. What the court stipulates in this regard will orient lower courts’ decisions in similar cases.

The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention. 

There’s a lot at stake for users’ rights online in the outcomes of these cases. 

The Many Perils and Pitfalls on the Way

Brazil’s platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of the previous presidential administration’s plans and actions to arbitrarily remain in power all inflamed discussions of regulating Big Tech. Because the debate’s main legislative vector, draft bill 2630 (PL 2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for introducing changes.

We’ve written about intermediary liability trends around the globe, how to move forward, and the risks that changes in safe harbors regimes end up reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights for internet users. 

One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering with automated takedowns of potential infringing content. 

While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive per minute, the resources they have for doing so are not available to other, smaller internet application providers that host users’ expression. Making automated content monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.

But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail to purge content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.

Just to give a few examples: during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police’s harsh repression of demonstrations, deeming it violent content. In Brazil, we saw similar concerns when Instagram censored images of the 2021 massacre in the Jacarezinho community, the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies won’t likely risk keeping such content up out of concern that a judge decides later that it wasn’t a reasonable doubt situation and orders them to pay damages.  Digital platforms have, then, a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, it means a strong incentive for conducting prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.  

Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and decide whether to keep content online, again the incentive is to err on the side of taking it down to avoid legal costs.

Brazil's own experience in courts shows how tricky the issue can be. InternetLab's research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that, at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, and some of those decisions were reversed later on. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. We should not forget the companies that thrived by offering reputation management services built upon the use of takedown mechanisms to make critical content disappear online.

It's important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’ duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to avoid the misuse of notification systems. Article 21 of Marco Civil provides that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements verifying that the complainant is the person offended. Beyond that, there is no further guidance on which details and justifications the notice should contain, or on whether the content’s author would have the opportunity, and a proper mechanism, to respond to or appeal the takedown request.

As we said before, we should not mix platform accountability with reinforcing digital platforms as points of control over people's online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, while also creating additional hurdles for smaller platforms and decentralized models to compete with the current digital giants.

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins – a period in which newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act are centered on the fundamental rights of users in the EU and beyond.
  • The EU must create conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights-centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

Amazon and Google Must Keep Their Promises on Project Nimbus

December 2, 2024 at 14:52

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask whether these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us, by November 1, 2024, how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.

But instead, they failed to respond. This is unfortunate, since it leads us to question how serious they were in their promises. And it should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families. 

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation.

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association—violations that pervasive surveillance fosters and furthers. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme.

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the companies’ commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first to raise these concerns, but having raised these questions publicly and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.

On Alaa Abd El Fattah’s 43rd Birthday, the Fight For His Release Continues

November 18, 2024 at 12:13

Today marks prominent British-Egyptian coder, blogger, activist, and political prisoner Alaa Abd El Fattah’s 43rd birthday—his eleventh behind bars. Alaa should have been released on September 29, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. Since September 29, Alaa’s mother, mathematician Leila Soueif, has been on hunger strike, while she and the rest of his family have worked to engage the British government in securing Alaa’s release.

Last November, an international counsel team acting on behalf of Alaa’s family filed an urgent appeal to the UN Working Group on Arbitrary Detention. EFF joined 33 other organizations in supporting the submission and urging the UNWGAD to promptly issue its opinion on the matter. Last week, we signed another letter once again urging the UNWGAD to issue an opinion.

Despite his ongoing incarceration, Alaa’s writing and his activism have continued to be honored worldwide. In October, he was announced as the joint winner of the PEN Pinter Prize alongside celebrated writer Arundhati Roy. His 2021 collection of essays, You Have Not Yet Been Defeated, has been re-released as part of Fitzcarraldo Editions’ First Decade Collection. Alaa is also the 2023 winner of PEN Canada’s One Humanity Award and the 2022 winner of EFF’s own EFF Award for Democratic Reform Advocacy.

EFF once again calls for Alaa Abd El Fattah’s immediate and unconditional release and urges the UN Working Group on Arbitrary Detention to promptly issue its opinion on his incarceration. We further urge the British government to take action to secure his release.

The UK Must Act: Alaa Abd El-Fattah Still Imprisoned 25 Days After Release Date

October 23, 2024 at 13:30

It’s been 25 days since September 29, the day that should have seen British-Egyptian blogger, coder, and activist Alaa Abd El Fattah walk free. Egyptian authorities refused to release him at the end of his sentence, in contradiction of the country's own Criminal Procedure Code, which requires that time served in pretrial detention count toward a prison sentence. In the days since, Alaa’s family has been able to secure meetings with high-level British officials, including Foreign Secretary David Lammy, but as of yet, the Egyptian government still has not released Alaa.

In early October, Alaa was named the 2024 PEN Writer of Courage by PEN Pinter Prize winner Arundhati Roy, who presented the award in a ceremony where it was received by Egyptian publication Mada Masr editor Lina Attalah on Alaa’s behalf.

Alaa’s mother, Laila Soueif, is now on her third week of hunger strike and says that she won’t stop until Alaa is free or she’s taken to the hospital. In recent weeks, Alaa’s mother and sisters have met with several members of Parliament in the hopes of placing more pressure on officials. As the BBC reports, his family are “deeply disappointed with how the current government, and the previous one, have handled his case” and believe that the UK has more leverage with Egypt that it is not using.

Alaa deserves to finally return to his family, now in the UK, and to be reunited with his son, Khaled, who is now a teenager. We urge EFF supporters in the UK to write to their MP to place pressure on the UK’s Labour government to use its power to push for Alaa’s release.

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

October 15, 2024 at 15:48

Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF today released the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt AI and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We organized the report’s content, along with testimonies on current challenges from civil society experts on the ground, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affects human rights. Colombians must undergo classification from Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems seeking to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like México, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public  participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. States’ responsibilities under international human rights law as guarantors of rights, and people and social groups as rights holders—entitled to claim those rights and to participate—are two basic tenets that must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making, as we underscore in the report.

Inter-American Human Rights Framework

Building on extensive research into the Inter-American Commission on Human Rights’ reports and the Inter-American Court of Human Rights’ decisions and advisory opinions, we map the human rights implications of government use of algorithmic systems and devise an operational framework for their due consideration.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill necessary and proportionate principles, and what this entails. We underscore what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of these rights embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving the policy’s goals, to continuously monitoring and evaluating their implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.
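As an illustration only—the report prescribes what must be disclosed, not any particular data format—the transparency elements described above could be captured in a machine-readable record. The field names and structure below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class TransparencyRecord:
    """Hypothetical machine-readable disclosure for a state AI/ADM system,
    mirroring the categories of transparency the report recommends."""
    system_name: str
    data_categories: list        # categories of personal data processed
    performance_metrics: dict    # e.g. accuracy, false-positive rate
    human_in_the_loop: bool      # whether a human reviews each decision
    training_data_summary: str   # description of training/testing datasets
    decision_justification: str  # clear, reasoned basis for decisions

    def is_complete(self) -> bool:
        # A record lacking a justification or a list of data categories
        # fails the "clear, reasoned, and coherent justification" and
        # data-disclosure requirements described above.
        return bool(self.decision_justification and self.data_categories)


record = TransparencyRecord(
    system_name="social-benefit-eligibility",
    data_categories=["income", "household size"],
    performance_metrics={"false_positive_rate": 0.08},
    human_in_the_loop=True,
    training_data_summary="2018-2022 national benefits registry sample",
    decision_justification="Eligibility score below statutory threshold",
)
print(record.is_complete())  # True
```

Publishing disclosures in a structured form like this would make it easier for affected individuals and auditors to verify that each required element is actually present.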

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that unfold from informational self-determination, including the right to know what data about them is contained in state records, where the data came from, and how it is being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises and are vital for ensuring that algorithmic-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment. 

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

By Karen Gullo
October 10, 2024 at 14:20

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility to not only protect users’ privacy but also be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to their responsibilities in ¿Quien Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and as such was dropped. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and a half. Celero scored highest among fixed internet providers with one and three-quarters stars. Interfast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel publish privacy policies for their services, while Más Móvil has committed to following relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

Neither mobile nor fixed internet companies have committed to informing users about requests or orders from authorities to access their personal data, according to the report. Companies have also maintained a passive stance toward promoting digital security.

None of the mobile providers have opposed requiring users to undergo facial recognition to register for or access their mobile phone services. As the report underlines, the companies’ acquiescence “marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data.” Mandating face recognition as a condition to use mobile services is “an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime,” the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote control with built-in voice commands to improve accessibility.  Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s Quien Defiende Tus Datos series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain. 

The X Corp. Shutdown in Brazil: What We Can Learn

October 8, 2024 at 12:39

Update (10/8/2024): Brazil lifted a ban on the X Corp. social media platform today after the country's Supreme Court said the company had complied with all of its orders. Regulators have 24 hours to reinstate the platform, though it could take longer for it to come back online.

The feud between X Corp. and Brazil’s Supreme Court continues to drag on: After a month-long standoff, X Corp. folded and complied with court orders to suspend several accounts, name a legal representative in Brazil, and pay 28.6 million reais ($5.24 million) in fines. That hasn’t cleared the matter up, though.

The Court says X paid the wrong bank, which X denies. Justice Alexandre de Moraes has asked that the funds be redirected to the correct bank and for Brazil’s prosecutor general to weigh in on X’s requests to be reinstated in Brazil.

So the drama continues, as does the collateral damage to millions of Brazilian users who rely on X Corp. to share information and expression. While we watch it unfold, it’s not too early to draw some important lessons for the future.

Let’s break it down.

How We Got Here

The Players

Unlike courts in many countries, the Brazilian Supreme Court has the power to conduct its own investigations in limited circumstances, and issue orders based on its findings. Justice Moraes has drawn on this power frequently in the past few years to target what he called “digital militias,” anti-democratic acts, and fake news. Many in Brazil believe that these investigations, combined with other police work, have helped rein in genuinely dangerous online activities and protect the survival of Brazil’s democratic processes, particularly in the aftermath of the January 2023 riots.

At the same time, Moraes’ actions have raised concerns about judicial overreach. For instance, his work is less than transparent. And the resulting content blocking orders more often than not demand suspension of entire accounts, rather than specific posts. Other leaked orders include broad requests for subscriber information of people who used a specific hashtag.

X Corp.’s controversial CEO, Elon Musk, has publicly criticized the blocking orders. And while he may be motivated by concern for online expression, it is difficult to untangle that motivation from his personal support for the far-right causes Moraes and others believe threaten democracy in Brazil.

The Standoff

In August, as part of an investigation into coordinated actions to spread disinformation and destabilize Brazilian democracy, Moraes ordered X Corp. to suspend accounts that were allegedly used to intimidate and expose law enforcement officers. Musk refused, directly contradicting his past statements that X Corp. “can’t go beyond the laws of a country”—a stance that supposedly justified complying with controversial orders to block accounts and posts in Turkey and India.

After Moraes gave X Corp. 24 hours to fulfill the order or face fines and the arrest of one of its lawyers, Musk closed down the company’s operations in Brazil altogether. Moraes then ordered Brazilian ISPs to block the platform until Musk designated a legal representative. And people who use tools such as VPNs to circumvent the block can be fined 50,000 reais (approximately $9,000 USD) per day.

These orders remain in place unless or until pending legal challenges succeed. Justice Moraes has also authorized Brazil’s Federal Police to monitor “extreme cases” of X Corp. use. It’s unclear what qualifies as an “extreme case,” or how far the police may take that monitoring authority. Flagged users must be notified that X Corp. has been blocked in Brazil; if they continue to use it via VPNs or other means, they are on the hook for substantial daily fines.

A Bridge Too Far

Moraes’ ISP blocking order, combined with the user fines, has been understandably controversial. International freedom of expression standards treat these kinds of orders as extreme measures, permissible only in exceptional circumstances where provided by law and in accordance with necessary and proportionate principles. Justice Moraes said the blocking was necessary given upcoming elections and the risk that X Corp. would ignore future orders and allow the spread of disinformation.

But it has also meant that millions of Brazilians cannot access a platform that, for them, is a valuable source of information. Indeed, restrictions on accessing X Corp. ended up creating hurdles to understanding and countering electoral disinformation. The Brazilian Association of Newspapers has argued the restrictions adversely impact journalism. At the same time, online electoral disinformation holds steady on other platforms (while possibly at a slower pace).

Moreover, now that X Corp. has bowed to his demands, Moraes’ concerns that the company cannot be trusted to comply with Brazilian law are harder to justify. In any event, there are far more balanced options now to deal with the remaining fines that don’t create collateral damage to millions of users.

What Comes Next: Concerns and Open Questions

There are several structural issues that have helped fuel the conflict and exacerbated its negative effects. First, the mechanisms for legal review of Moraes’ orders are unclear and/or ineffective. The Supreme Court has previously held that X Corp. itself cannot challenge suspension of user accounts, thwarting a legal avenue for platforms to defend their users’ speech—even where they may be the only entities that even know about the order before accounts are shut down.

A Brazilian political party and the Federal Council of the Brazilian Bar Association filed legal challenges to the blocking order and user fines, respectively, but it is likely that courts will find these challenges procedurally improper as well.

Back in 2016, a single Supreme Court Justice held back a wave of blocking orders targeting WhatsApp. Eight years later, a single Justice may have created a new precedent in the opposite direction—with little or no means to appeal it.

Second, this case highlights what can happen when too much power is held by just a few people or institutions. On the one hand, in Brazil as elsewhere, a handful of wealthy corporations wield enormous power over online expression. Here, that problem is exacerbated by Elon Musk’s control of Starlink, an important satellite internet provider in Brazil.

On the other hand, the Supreme Court also has tremendous power. Although the court’s actions may have played an important role in preserving Brazilian democracy in recent years, powers that are not properly subject to public oversight or meaningful challenge invite overreach.

All of which speaks to a need for better transparency (in both the public and private sectors) and real checks and balances. Independent observers note that, despite challenges, Brazil has already improved its democratic processes. Strengthening this path includes preventing judicial overreach.

As for social media platforms, the best way to stave off future threats to online expression may be to promote more alternatives, so no single powerful person, whether a judge, a billionaire, or even a president, can dramatically restrict online expression with the stroke of a pen.
Calls to Scrap Jordan’s Cybercrime Law Echo Calls to Reject the Cybercrime Treaty

In a number of countries around the world, communities—and particularly those that are already vulnerable—are threatened by expansive cybercrime and surveillance legislation. One of those countries is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

We’ve criticized this law before, noting how it was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. It broadly criminalizes online content labeled as “pornographic” or deemed to “expose public morals,” and prohibits the use of Virtual Private Networks (VPNs) and other proxies. Now, EFF has joined thirteen digital rights and free expression organizations in calling once again for Jordan to scrap the controversial cybercrime law.

The open letter, organized by Article 19, calls upon Jordanian authorities to cease use of the cybercrime law to target and punish dissenting voices and stop the crackdown on freedom of expression. The letter also reads: “We also urge the new Parliament to repeal or substantially amend the Cybercrime Law and any other laws that violate the right to freedom of expression and bring them in line with international human rights law.”

Jordan’s law is a troubling example of how overbroad cybercrime legislation can be misused to target marginalized communities and suppress dissent. This is the type of legislation that the U.N. General Assembly has expressed concern about, including in 2019 and 2021, when it warned against cybercrime laws being used to target human rights defenders. These concerns are echoed by years of reports from U.N. human rights experts on how abusive cybercrime laws facilitate human rights abuses.

The U.N. Cybercrime Treaty also poses serious threats to free expression. Far from protecting against cybercrime, this treaty risks becoming a vehicle for repressive cross-border surveillance practices. By allowing broad international cooperation in surveillance for any crime deemed “serious” under national laws—defined as offenses punishable by at least four years of imprisonment—and without robust mandatory safeguards or detailed operational requirements to ensure “no suppression” of expression, the treaty risks being exploited by governments to suppress dissent and target marginalized communities, as seen with Jordan’s overbroad 2023 cybercrime law. The fate of the U.N. Cybercrime Treaty now lies in the hands of member states, who will decide on its adoption later this year.

Human Rights Lawsuits Against Cisco Can Move Forward (Again)

By Cindy Cohn
September 18, 2024 at 18:04

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems that are used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon  are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   

Unveiling Repression in Venezuela: A Legacy of Surveillance and State Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To grasp the full scope of this crisis, it is essential to understand that this repression is not a series of isolated actions but a comprehensive, systematic effort that has been building for over 15 years. It combines degraded infrastructure (essential services kept barely functional), blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans have taken to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in the crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have mounted a severe crackdown on demonstrations, leaving 20 people dead. The results announced by the government, which claimed a re-election of Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with repression, including numerous instances of technology-supported violence. The surveillance and control apparatus saw intensified use, such as increased deployment of VenApp, a surveillance application originally launched in December 2022 for reporting failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities deemed suspicious by the state and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is a pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media; some individuals have refused interviews or published documented human rights violations under generic names; and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Many are also sending information about what is happening in the country to their networks abroad for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist; in August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.

 

Britain Must Call for Release of British-Egyptian Activist and Coder Alaa Abd El Fattah

As British-Egyptian coder, blogger, and activist Alaa Abd El Fattah enters his fifth year in a maximum-security prison outside Cairo, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa, we stand with his family and an ever-growing international coalition of supporters in calling for his release.

Over these five years, Alaa has endured beatings and solitary confinement, and his family was at times denied visits or any contact with him. He went on a seven-month hunger strike to protest his incarceration, and his family feared that he might not make it.

But global attention on his plight, bolstered by support from British officials in recent years, ultimately led to improved prison conditions and family visitation rights.

But let’s be clear: Egypt’s long-running retaliation against Alaa for his activism is a travesty and an arbitrary use of its draconian, anti-speech laws. He has spent the better part of the last 10 years in prison. He has been investigated and imprisoned under every Egyptian regime that has served in his lifetime. The time is long overdue for him to be freed.

Over 20 years ago Alaa began using his technical skills to connect coders and technologists in the Middle East to build online communities where people could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s successive repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Alaa is a symbol for the principle of free speech in a region of the world where speaking out for justice and human rights is dangerous and using the power of technology to build community is criminalized. But he has also come to symbolize the oppression and cruelty with which the Egyptian government treats those who dare to speak out against authoritarianism and surveillance.

Egyptian authorities’ relentless, politically motivated pursuit of Alaa is an egregious display of abusive police power and lack of due process. He was first arrested and detained in 2006 for participating in a demonstration. He was arrested again in 2011 on charges related to another protest. In 2013 he was arrested and detained on charges of organizing a protest. He was eventually released in 2014, but imprisoned again after a judge found him guilty in absentia.

What diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family? - David Lammy

That same year he was released on bail, only to be re-arrested when he went to court to appeal his case. In 2015 he was sentenced to five years in prison and released in 2019. But he was re-arrested in a massive sweep of activists in Egypt while on probation and charged with spreading false news and belonging to a terrorist organization for sharing a Facebook post about human rights violations in prison. He was sentenced in 2021, after being held in pre-trial detention for more than two years, to five years in prison. September 29 will mark five years that he has spent behind bars.

While he has been in prison, an anthology of his writing, translated into English by anonymous supporters, was published in 2021 as You Have Not Yet Been Defeated. That December, he became a British citizen through his mother, the rights activist and mathematician Laila Soueif.

Protesting his conditions, Alaa shaved his head and went on hunger strike beginning in April 2022. As he neared the third month of his hunger strike, then-UK Foreign Secretary Liz Truss said she was working hard to secure his release. Similarly, then-Prime Minister Rishi Sunak wrote in a letter to Alaa’s sister, Sanaa Seif, that “the government is deeply committed to doing everything we can to resolve Alaa's case as soon as possible."

David Lammy, then a Member of Parliament and now Britain’s foreign secretary, asked Parliament in November 2022, “what diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family?” Lammy joined Alaa’s family during a sit-in outside of the Foreign Office.

When the UK government’s promises failed to come to fruition, Alaa escalated his hunger strike in the runup to the COP27 gathering. At the same time, a coordinated campaign led by his family and supported by a number of international organizations helped draw global attention to his plight, and ultimately led to improved prison conditions and family visitation rights.

But although Alaa’s conditions have improved and his family visitation rights have been secured, he remains wrongfully imprisoned, and his family fears that the Egyptian government has no intention of releasing him.

With Lammy now Foreign Secretary and a new Labour government in place in the UK, there is renewed hope for Alaa’s release. Keir Starmer, the Labour leader and new prime minister, has voiced his support for Abd El Fattah’s release.

The new government must make good on its pledge to defend British values and interests, and advocate for the release of British citizen Alaa Abd El Fattah. We encourage British citizens to write to their MP (external link) and advocate for his release. His continued detention is indefensible. Egypt should face the sole of shoes around the world until Abd El Fattah is freed.

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

By Karen Gullo
August 1, 2024, 10:08

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes—defined as offenses punishable by four or more years of imprisonment. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise it risks authorizing cross-border spying and extensive surveillance, and enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty to genuinely protect human rights, and to ensure comprehensive mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be narrowed to offenses involving death, injury, or other grave harms, further limiting the treaty’s scope.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

By Karen Gullo
July 31, 2024, 10:53

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and it does not provide mandatory safeguards to protect researchers’ and journalists’ rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and get access to sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and bypassing effective security measures, which would protect researchers and journalists.

By not requiring malicious or dishonest intent when accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

By Karen Gullo
July 30, 2024, 18:44

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends including in the draft an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights; narrowing the treaty’s scope; explicitly requiring that crimes covered by the treaty be committed with “criminal intent”; and making several other changes.

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, raised concerns about the draft’s provisions on the liability of online platforms for offenses committed by their users, warning that online intermediaries could be held liable even when they are unaware of such user-generated content.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it could criminalize, or risk criminalizing, content and conduct that has evidentiary, scientific, or artistic value, and it doesn’t sufficiently decriminalize the consensual conduct of older children in relationships.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, jeopardizing survivors’ ability to safely report CSAM to law enforcement or platforms.

HRW recommends adding language that excludes manifestly artistic material, among other uses, as well as conduct carried out for legitimate purposes related to documenting human rights abuses or the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, a proponent of international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial requests for data from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them) as ICT—a broader term with a wider remit.”

 

Weak Human Rights Protections: Why You Should Hate the Proposed UN Cybercrime Treaty

By Karen Gullo
July 30, 2024, 08:58

The proposed UN Cybercrime Convention dangerously undermines human rights, opening the door to unchecked cross-border surveillance and government overreach. Despite two and a half years of negotiations, the draft treaty authorizes extensive surveillance powers without robust safeguards, omitting essential data protection principles.

This risks turning international efforts to fight cybercrime into tools for human rights abuses and transnational repression.

Safeguards like prior judicial authorization require a judge’s approval of surveillance before it happens, ensuring the measure is legitimate, necessary, and proportionate. Notifying individuals when their data is accessed gives them an opportunity to challenge requests that they believe are disproportionate or unjustified.

Additionally, requiring states to publish statistical transparency reports can provide a clear overview of surveillance activities. These safeguards are not just legal formalities; they are vital for upholding the integrity and legitimacy of law enforcement activities in a democratic society.

Unfortunately, the draft treaty is severely lacking in these protections. An article in the current draft about conditions and safeguards is vaguely written, permitting countries to apply safeguards only “where appropriate” and making them dependent on states’ domestic laws, some of which have weak human rights protections. This means the level of protection against abusive surveillance and data collection can vary widely based on each country’s discretion.

Extensive surveillance powers must be reined in and strong human rights protections added. Without those changes, the proposed treaty unacceptably endangers human rights around the world and should not be approved.

Check out our two detailed analyses of the lack of human rights safeguards in the draft treaty.

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

By Karen Gullo
July 24, 2024, 22:06

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool for countries with poor human rights records, one that can be used against journalists, dissenters, and everyday people. Watch the briefing here:

 

