
New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility not only to protect users’ privacy but also to be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to these responsibilities in ¿Quién Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and as such was dropped. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and a half. Celero scored highest among fixed internet providers with one and three-quarters stars. InterFast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel publish privacy policies for their services, while Más Móvil has committed to follow relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

Neither mobile nor fixed internet providers commit to informing users about requests or orders from authorities to access their personal data, according to the report. As for digital security, companies have taken a passive stance toward promoting it.

None of the mobile providers have opposed requiring users to undergo facial recognition to register or access their mobile phone services. As the report underlines, companies' resignation "marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data." Mandating face recognition as a condition to use mobile services is "an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime," the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote control with built-in voice commands to improve accessibility.  Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s ¿Quién Defiende Tus Datos? series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain.

EFF Awards Night: Celebrating Digital Rights Founders Advancing Free Speech and Access to Information Around the World

Digital freedom and investigative reporting about technology have been at risk amid political and economic strife around the world. This year’s annual EFF Awards honored the achievements of people helping to ensure that the power of technology, the right to privacy and free speech, and access to information are available to people all over the world.

On September 12 in San Francisco’s Presidio, EFF presented awards to investigative news organization 404 Media, founder of Latin American digital rights group Fundación Karisma Carolina Botero, and Cairo-based nonprofit Connecting Humanity, which helps Palestinians in Gaza regain access to the internet.

All our award winners overcame roadblocks to build organizations that protect and advocate for people’s rights to online free speech, digital privacy, and the ability to live free from government surveillance.  

If you missed the ceremony in San Francisco, you can still catch what happened on YouTube and the Internet Archive. You can also find a transcript of the live captions.

Watch Now

EFF Awards Ceremony on YouTube

EFF Executive Director Cindy Cohn kicked off the ceremony, highlighting some of EFF’s recent achievements and milestones, including our How to Save the Internet podcast, now in its fifth season, which won two awards this year and saw a 21 percent increase in downloads. 

Cindy talked about EFF’s legal work defending a security researcher at this year’s DEF CON who was threatened for his planned talk about a security vulnerability he discovered. EFF’s Coders’ Rights team helped the researcher avoid a lawsuit and present his talk on the conference’s last day. Another win: EFF fought back to ensure that police drone footage was not exempt from public records requests. As a result, “we can see what the cops are seeing,” Cindy said.

Cindy Cohn speaks from a podium.

EFF Executive Director Cindy Cohn kicks off the ceremony.

“It can be truly exhausting and scary to feel the weight of the world’s problems on our shoulders, but I want to let you in on a secret,” she said. “You’re not alone, and we’re not alone. And, as a wise friend once said, courage is contagious.” 

Cindy turned the program over to guest speaker Elizabeth Minkel, journalist and co-host of the long-running fan culture podcast Fansplaining. Elizabeth kept the audience giggling as she recounted her personal fandom history with Buffy the Vampire Slayer and later Harry Potter, and how EFF’s work defending fair use and fighting copyright maximalism has helped fandom art and fiction thrive despite attacks from movie studios and entertainment behemoths.

Elizabeth Minkel speaks at a podium.

Elizabeth Minkel—co-host and editor of the Fansplaining podcast, journalist, and editor.

“The EFF’s fight for open creativity online has been helping fandom for longer than I’ve had an internet connection,” Minkel said. “Your values align with what I think of as the true spirit of transformative fandom, free and open creativity, and a strong push back against those copyright strangleholds in the homogenization of the web.”

Presenting the first award of the evening, EFF Director of Investigations Dave Maass took the stage to introduce 404 Media, winner of EFF’s Award for Fearless Journalism. The outlet’s founders were all tech journalists who worked together at Vice Media’s Motherboard when its parent company filed for bankruptcy in May 2023. All were out of a job, part of a terrible trend of reporter layoffs and shuttered news sites as media businesses struggle financially.

Journalists Jason Koebler, Sam Cole, Joseph Cox, and Emanuel Maiberg together resolved to go out on their own; in 2023 they started 404 Media, aiming to uncover stories about how technology impacts people in the real world.

Since its founding, journalist-owned 404 Media has published scoops on hacking, cybersecurity, cybercrime, artificial intelligence, and consumer rights. They uncovered the many ways tech companies and speech platforms sell users' data without their knowledge or consent to AI companies for training purposes. Their reporting led to Apple banning apps that help create non-consensual sexual AI imagery, and revealed a feature on New York City subway passes that enabled rider location tracking, leading the subway system to shut down the feature.

Jason Koebler delivers a video speech from home.

Jason Koebler remotely accepts the EFF Award for Fearless Journalism on behalf of 404 Media.

“We believe that there is a huge demand for journalism that is written by humans for other humans, and that real people do not want to read AI-generated news stories that are written for search engine optimization algorithms and social media,” said 404 Media's Jason Koebler in a video recorded for the ceremony.

EFF Director for International Freedom of Expression Jillian York introduced the next award recipient, Cairo-based nonprofit Connecting Humanity represented by Egyptian journalist and activist Mirna El Helbawi.

The organization collects and distributes embedded SIMs (eSIMs), a software version of the physical chip used to connect a phone to cellular networks and the internet. The eSIMs have helped thousands of Gazans stay digitally connected with family and the outside world, speak to loved ones at hospitals, and seek emergency help amid telecom and internet blackouts during Israel’s war with Hamas.

Connecting Humanity has distributed 400,000 eSIMs to people in Gaza since October. The eSIMs have been used to save families from under the rubble, allow people to resume their online jobs and attend online school, connect hospitals in Gaza, and assist journalists reporting on the ground, Mirna said.

Mirna El Helbawi accepts EFF Award on behalf of Connecting Humanity.

“This award is for Connecting Humanity’s small team of volunteers, who worked day and night to connect people in Gaza for the past 11 months and are still going strong,” she told the audience. “They are the most selfless people I have ever met. Not a single day has passed without this team doing their best to ensure that people are connecting in Gaza.”

EFF Policy Director for Global Privacy Katitza Rodriguez took the stage next to introduce the night’s final honoree, Fundación Karisma founder and former executive director Carolina Botero. A researcher, lecturer, writer, and consultant, Carolina is among the foremost leaders in the fight for digital rights in Latin America.

Karisma has worked since 2003 to put digital privacy and security on policymaking agendas in Colombia and the region and ensure that technology protects human rights.

She played a key role in helping to defeat a copyright law that would have brought a DMCA-like notice and takedown regime in Colombia, threatening free expression. Her opposition to the measure made her a target of government surveillance, but even under intense pressure from the government, she refused to back down.

Karisma and other NGOs proposed amending Brazil’s intelligence law to strengthen monitoring, transparency, and accountability mechanisms, and fought to increase digital security for human rights and environmental activists, who are often targets of government tracking.

Carolina Botero receives EFF Award from Katitza Rodriguez.

Carolina Botero receives the EFF Award for Fostering Digital Rights in Latin America.

“Quiet work is a particularly thankless aspect of our mission in countries like Colombia, where there are few resources and few capacities, and where these issues are not on the public agenda,” Carolina said in her remarks. She left her position at Karisma this year, opening the door for a new generation while leaving an inspiring legacy in Latin America’s fight for digital rights.

EFF is grateful that it can honor and lift up the important work of these award winners, who work both behind the scenes and in very public ways to protect online privacy, access to information, free expression, and the ability to find community and communicate with loved ones and the world on the internet.

The night’s honorees saw injustices, rights violations, and roadblocks to information and free expression, and did something about it. We thank them.

And thank you to all EFF members around the world who make our work possible—public support is the reason we can push for a better internet. If you're interested in supporting our work, consider becoming an EFF member! You can get special gear as a token of our thanks and help support the digital freedom movement.

Of course, special thanks to the sponsors of this year’s EFF Awards: Dropbox and Electric Capital.

Britain Must Call for Release of British-Egyptian Activist and Coder Alaa Abd El Fattah

As British-Egyptian coder, blogger, and activist Alaa Abd El Fattah enters his fifth year in a maximum security prison outside Cairo, unjustly charged for supporting online free speech and privacy for Egyptians and people across the Middle East and North Africa, we stand with his family and an ever-growing international coalition of supporters in calling for his release.

Over these five years, Alaa has endured beatings and solitary confinement. His family was at times denied visits or any contact with him. He went on a seven-month hunger strike in protest of his incarceration, and his family feared that he might not make it.

But global attention on his plight, bolstered by support from British officials in recent years, ultimately led to improved prison conditions and family visitation rights.

But let’s be clear: Egypt’s long-running retaliation against Alaa for his activism is a travesty and an arbitrary use of its draconian, anti-speech laws. He has spent the better part of the last 10 years in prison. He has been investigated and imprisoned under every Egyptian regime that has held power in his lifetime. The time is long overdue for him to be freed.

Over 20 years ago Alaa began using his technical skills to connect coders and technologists in the Middle East to build online communities where people could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s successive repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Alaa is a symbol for the principle of free speech in a region of the world where speaking out for justice and human rights is dangerous and using the power of technology to build community is criminalized. But he has also come to symbolize the oppression and cruelty with which the Egyptian government treats those who dare to speak out against authoritarianism and surveillance.

Egyptian authorities’ relentless, politically motivated pursuit of Alaa is an egregious display of abusive police power and lack of due process. He was first arrested and detained in 2006 for participating in a demonstration. He was arrested again in 2011 on charges related to another protest. In 2013 he was arrested and detained on charges of organizing a protest. He was eventually released in 2014, but imprisoned again after a judge found him guilty in absentia.

What diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family? - David Lammy

That same year he was released on bail, only to be re-arrested when he went to court to appeal his case. In 2015 he was sentenced to five years in prison and released in 2019. But he was re-arrested in a massive sweep of activists in Egypt while on probation and charged with spreading false news and belonging to a terrorist organization for sharing a Facebook post about human rights violations in prison. He was sentenced in 2021, after being held in pre-trial detention for more than two years, to five years in prison. September 29 will mark five years that he has spent behind bars.

While he has been in prison, an anthology of his writing, translated into English by anonymous supporters, was published in 2021 as You Have Not Yet Been Defeated. That December, he became a British citizen through his mother, the rights activist and mathematician Laila Soueif.

Protesting his conditions, Alaa shaved his head and went on hunger strike beginning in April 2022. As he neared the third month of his hunger strike, former UK foreign secretary Liz Truss said she was working hard to secure his release. Similarly, then-PM Rishi Sunak wrote in a letter to Alaa’s sister, Sanaa Seif, that “the government is deeply committed to doing everything we can to resolve Alaa's case as soon as possible."

David Lammy, then a Member of Parliament and now Britain’s foreign secretary, asked Parliament in November 2022, “what diplomatic price has Egypt paid for denying the right of consular access to a British citizen? And will the Minister make clear there will be serious diplomatic consequences if access is not granted immediately and Alaa is not released and reunited with his family?” Lammy joined Alaa’s family during a sit-in outside of the Foreign Office.

When the UK government’s promises failed to come to fruition, Alaa escalated his hunger strike in the runup to the COP27 gathering. At the same time, a coordinated campaign led by his family and supported by a number of international organizations helped draw global attention to his plight, and ultimately led to improved prison conditions and family visitation rights.

But although Alaa’s conditions have improved and his family visitation rights have been secured, he remains wrongfully imprisoned, and his family fears that the Egyptian government has no intention of releasing him.

With Lammy now Foreign Secretary and a new Labour government in place in the UK, there is renewed hope for Alaa’s release. Labour leader and new Prime Minister Keir Starmer has voiced his support for freeing Alaa.

The new government must make good on its pledge to defend British values and interests and advocate for the release of British citizen Alaa Abd El Fattah. We encourage British citizens to write to their MP and advocate for his release. His continued detention is unjust. Egypt should face sustained international pressure until Alaa is freed.

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes—defined as offenses punishable by four or more years of imprisonment. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross-border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise it risks authorizing cross-border spying and extensive surveillance, and enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty to genuinely protect human rights and ensure comprehensive mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be revised to include those involving death, injury, or other grave harms to further limit the scope of the treaty.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and does not provide mandatory safeguards to protect their rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and get access to sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and bypassing effective security measures, which would protect researchers and journalists.

By not mandating malicious or dishonest intent when accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the Office of the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends including in the draft an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights, narrowing the treaty’s scope, explicitly including language that crimes covered by the treaty must be committed with “criminal intent,” and several other changes.

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, raised concerns about the draft’s provisions on the liability of online platforms for offenses committed by their users, which risk making online intermediaries liable even when they are unaware of such user-generated content.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it could criminalize or risk criminalizing content and conduct that has evidentiary, scientific, or artistic value, and doesn’t sufficiently decriminalize consensual conduct between older children.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, thereby jeopardizing survivors’ ability to safely report CSAM activity to law enforcement or platforms.

HRW recommends adding language that excludes material manifestly artistic, among other uses, and conduct that is carried out for legitimate purposes related to documentation of human rights abuses or the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, a proponent of international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial requests for data from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them) as ICT—a broader term with a wider remit.”

 

Weak Human Rights Protections: Why You Should Hate the Proposed UN Cybercrime Treaty

The proposed UN Cybercrime Convention dangerously undermines human rights, opening the door to unchecked cross-border surveillance and government overreach. Despite two and a half years of negotiations, the draft treaty authorizes extensive surveillance powers without robust safeguards, omitting essential data protection principles.

This risks turning international efforts to fight cybercrime into tools for human rights abuses and transnational repression.

Safeguards like prior judicial authorization call for a judge's approval of surveillance before it happens, ensuring the measure is legitimate, necessary and proportionate. Notifying individuals when their data is being accessed gives them an opportunity to challenge requests that they believe are disproportionate or unjustified.

Additionally, requiring states to publish statistical transparency reports can provide a clear overview of surveillance activities. These safeguards are not just legal formalities; they are vital for upholding the integrity and legitimacy of law enforcement activities in a democratic society.

Unfortunately, the draft treaty is severely lacking in these protections. An article in the current draft about conditions and safeguards is vaguely written, permitting countries to apply safeguards only "where appropriate" and making them dependent on States' domestic laws, some of which have weak human rights protections. This means that the level of protection against abusive surveillance and data collection can vary widely based on each country's discretion.

Extensive surveillance powers must be reined in and strong human rights protections added. Without those changes, the proposed treaty unacceptably endangers human rights around the world and should not be approved.

Check out our two detailed analyses about the lack of human rights safeguards in the draft treaty.

Why You Should Hate the Proposed UN Cybercrime Treaty

International UN treaties aren’t usually on users’ radar. They are debated, often over the course of many years, by diplomats and government functionaries in Vienna or New York, and their significance is often overlooked or lost in the flood of information and news we process every day, even when they expand police powers and threaten the fundamental rights of people all over the world.

Such is the case with the proposed UN Cybercrime Treaty. For more than two years, EFF and its international civil society partners have been deeply involved in spreading the word about, and fighting to fix, seriously dangerous flaws in the draft convention. In the coming days we will publish a series of short posts that cut through the draft’s dense, highly technical text explaining the real-world effects of the convention.

The proposed treaty, pushed by Russia and shepherded by the UN Office on Drugs and Crime, is an agreement between nations purportedly aimed at strengthening cross-border investigations and prosecutions of cybercriminals who spread malware, steal data for ransom, and cause data breaches, among other offenses.

The problem is, as currently written, the treaty gives governments massive surveillance and data collection powers to go after not just cybercrime, but any offense they define as serious that involves the use of a computer or communications system. In some countries, that includes criticizing the government in a social media post, expressing support online for LGBTQ+ rights, or publishing news about protests or massacres.

Tech companies and their overseas staff, under certain treaty provisions, would be compelled to help governments in their pursuit of people’s data, locations, and communications, subject to domestic jurisdictions, many of which establish draconian fines.

We have called the draft convention a blank check for surveillance abuse that can be used as a tool for human rights violations and transnational repression. It’s an international treaty that everyone should know and care about because it threatens the rights and freedoms of people across the globe. Keep an eye out for our posts explaining how.

For our key concerns, read our three-pager:

Briefing: Negotiating States Must Address Human Rights Risks in the Proposed UN Surveillance Treaty

At a virtual briefing today, experts from the Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media outlined the human rights risks posed by the proposed UN Cybercrime Treaty. They explained that the draft convention, instead of addressing core cybercrimes, is an extensive surveillance treaty that imposes intrusive domestic spying measures with little to no safeguards protecting basic rights. UN Member States are scheduled to hold a final round of negotiations about the treaty's text starting July 29.

If left as is, the treaty risks becoming a powerful tool for countries with poor human rights records that can be used against journalists, dissenters, and everyday people. Watch the briefing here:

 


Media Briefing: EFF, Partners Warn UN Member States Are Poised to Approve Dangerous International Surveillance Treaty

Countries That Believe in Rule of Law Must Push Back on Draft That Expands Spying Powers, Benefiting Authoritarian Regimes

SAN FRANCISCO—On Wednesday, July 24, at 11:00 am Eastern Time (8:00 am Pacific Time, 5:00 pm CET), experts from Electronic Frontier Foundation (EFF), Access Now, Derechos Digitales, Human Rights Watch, and the International Fund for Public Interest Media will brief reporters about the imminent adoption of a global surveillance treaty that threatens human rights around the world, potentially paving the way for a new era of transnational repression.

The virtual briefing will update members of the media ahead of the United Nations’ concluding session of treaty negotiations, scheduled for July 29-August 9 in New York, to possibly finalize and adopt what started out as a treaty to combat cybercrime.

Despite repeated warnings and recommendations by human rights organizations, journalism and industry groups, cybersecurity experts, and digital rights defenders to add human rights safeguards and rein in the treaty’s broad scope and expansive surveillance powers, UN Member States are expected to adopt the Russian-backed, deeply flawed draft.

The experts will discuss the draft treaty in terms of shifts in geopolitical power, abuse of cybercrime laws, and challenges posed by the rising influence of Russia and China. A question-and-answer session will follow speaker presentations.  

WHAT:
Virtual media briefing on UN surveillance treaty

HOW:
To join the news conference remotely, please register from the following link to receive the webinar ID and password:
https://eff.zoom.us/meeting/register/tZwkd-GsrzoiH9Jt3gsl2CJ55Xv0hBDguxW5

SPEAKERS:
Tirana Hassan, Executive Director, Human Rights Watch
Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales
Khadija Patel, Journalist in Residence, International Fund for Public Interest Media
Katitza Rodriguez, Policy Director for Global Policy, EFF
Moderator: Raman Jit Singh Chima, Global Cybersecurity Lead and Senior International Counsel, Access Now

WHEN:
Wednesday, July 24, at 11:00 am Eastern Time, 8:00 am Pacific Time, 5:00 pm CET

For EFF’s submissions and Coalition Letters to UN Ad Hoc Committee overseeing treaty negotiations:
https://www.eff.org/pages/submissions#main-content

Contact:
Karen Gullo
Senior Writer for Free Speech and Privacy
Deborah Brown
Senior Researcher and Advocate on Technology and Rights, Human Rights Watch

EFF, International Partners Appeal to EU Delegates to Help Fix Flaws in Draft UN Cybercrime Treaty That Can Undermine EU's Data Protection Framework

With the final negotiating session to approve the UN Cybercrime Treaty just days away, EFF and 21 international civil society organizations today urgently called on delegates from EU states and the European Commission to push back on the draft convention's many flaws, which include an excessively broad scope that will grant intrusive surveillance powers without robust human rights and data protection safeguards.

The time is now to demand changes in the text to narrow the treaty's scope, limit surveillance powers, and spell out data protection principles. Without these fixes, the draft treaty stands to give governments' abusive practices the veneer of international legitimacy and should be rejected.

Letter below:

Urgent Appeal to Address Critical Flaws in the Latest Draft of the UN Cybercrime Convention


Ahead of the reconvened concluding session of the United Nations (UN) Ad Hoc Committee on Cybercrime (AHC) in New York later this month, we, the undersigned organizations, wish to urgently draw your attention to the persistent critical flaws in the latest draft of the UN cybercrime convention (hereinafter Cybercrime Convention or the Convention).

Despite the recent modifications, we continue to share profound concerns regarding the persistent shortcomings of the present draft and we urge member states to not sign the Convention in its current form.

Key concerns and proposals for remedy:

  1. Overly Broad Scope and Legal Uncertainty:

  • The draft Convention’s scope remains excessively broad, including cyber-enabled offenses and other content-related crimes. The proposed title of the Convention and the introduction of the new Article 4 – with its open-ended reference to “offenses established in accordance with other United Nations conventions and protocols” – create significant legal uncertainty and expand the scope to an indefinite list of possible crimes to be determined only in the future. This ambiguity risks criminalizing legitimate online expression, having a chilling effect detrimental to the rule of law. We continue to recommend narrowing the Convention’s scope to clearly defined, already existing cyber-dependent crimes only, to facilitate its coherent application, ensure legal certainty and foreseeability, and minimize potential abuse.
  • The draft Convention in Article 18 lacks clarity concerning the liability of online platforms for offenses committed by their users. The current draft of the Article lacks the requirement of intentional participation in offenses established in accordance with the Convention, thereby also contradicting Article 19, which does require intent. This poses the risk that online intermediaries could be held liable for information disseminated by their users, even without actual knowledge or awareness of the illegal nature of the content (as set out in the EU Digital Services Act), which will incentivise overly broad content moderation efforts by platforms to the detriment of freedom of expression. Furthermore, the wording is much broader (“for participation”) than the Budapest Convention (“committed for the corporation’s benefit”) and would merit clarification along the lines of paragraph 125 of the Council of Europe Explanatory Report to the Budapest Convention.
  • The proposal in the revised draft resolution to elaborate a draft protocol supplementary to the Convention represents a further push to expand the scope of offenses, risking the creation of a limitlessly expanding, increasingly punitive framework.
  2. Insufficient Protection for Good-Faith Actors:

  • The draft Convention fails to incorporate language sufficient to protect good-faith actors, such as security researchers (irrespective of whether it concerns the authorized testing or protection of an information and communications technology system), whistleblowers, activists, and journalists, from excessive criminalization. It is crucial that the mens rea element in the provisions relating to cyber-dependent crimes includes references to criminal intent and harm caused.
  3. Lack of Specific Human Rights Safeguards:

  • Article 6 fails to include specific human rights safeguards – as proposed by civil society organizations and the UN High Commissioner for Human Rights – to ensure a common understanding among Member States and to facilitate the application of the treaty without unlawful limitation of human rights or fundamental freedoms. These safeguards should: 
    • be applicable to the entire treaty to ensure that cybercrime efforts provide adequate protection for human rights;
    • be in accordance with the principles of legality, necessity, proportionality, non-discrimination, and legitimate purpose;
    • incorporate the right to privacy among the human rights specified;
    • address the lack of effective gender mainstreaming to ensure the Convention does not undermine human rights on the basis of gender.
  4. Procedural Measures and Law Enforcement:

  • The Convention should limit the scope of procedural measures to the investigation of the criminal offenses set out in the Convention, in line with point 1 above.
  • In order to facilitate their application and – in light of their intrusiveness – to minimize the potential for abuse, this chapter of the Convention should incorporate the following minimal conditions and safeguards as established under international human rights law. Specifically, the following should be included in Article 24:
    • the principles of legality, necessity, proportionality, non-discrimination and legitimate purpose;
    • prior independent (judicial) authorization of surveillance measures and monitoring throughout their application;
    • adequate notification of the individuals concerned once it no longer jeopardizes investigations;
    • and regular reports, including statistical data on the use of such measures.
  • Articles 28/4, 29, and 30 should be deleted, as they include excessive surveillance measures that open the door for interference with privacy without sufficient safeguards as well as potentially undermining cybersecurity and encryption.
  5. International Cooperation:

  • The Convention should limit the scope of international cooperation solely to the crimes set out in the Convention itself to avoid misuse (as per point 1 above.) Information sharing for law enforcement cooperation should be limited to specific criminal investigations with explicit data protection and human rights safeguards.
  • Article 40 requires “the widest measure of mutual legal assistance” for offenses established in accordance with the Convention as well as any serious offense under the domestic law of the requesting State. Specifically, where no treaty on mutual legal assistance applies between State Parties, paragraphs 8 to 31 establish extensive rules on obligations for mutual legal assistance with any State Party with generally insufficient human rights safeguards and grounds for refusal. For example, paragraph 22 sets a high bar of “substantial grounds for believing” for the requested State to refuse assistance.
  • When State Parties cannot transfer personal data in compliance with their applicable laws, such as the EU data protection framework, the conflicting obligation in Article 40 to afford the requesting State “the widest measure of mutual legal assistance” may unduly incentivize the transfer of the personal data subject to appropriate conditions under Article 36(1)(b), e.g. through derogations for specific situations in Article 38 of the EU Law Enforcement Directive. Article 36(1)(c) of the Convention also encourages State Parties to establish bilateral and multilateral agreements to facilitate the transfer of personal data, which creates a further risk of undermining the level of data protection guaranteed by EU law.
  • When personal data is transferred in full compliance with the data protection framework of the requested State, Article 36(2) should be strengthened to include clear, precise, unambiguous and effective standards to protect personal data in the requesting State, and to avoid personal data being further processed and transferred to other States in ways that may violate the fundamental right to privacy and data protection.

Conclusion and Call to Action:

Throughout the negotiation process, we have repeatedly pointed out the risks the treaty in its current form poses to human rights and to global cybersecurity. Despite the latest modifications, the revised draft fails to address our concerns and continues to risk making individuals and institutions less safe and more vulnerable to cybercrime, thereby undermining its very purpose.

Failing to narrow the scope of the whole treaty to cyber-dependent crimes, to protect the work of security researchers, human rights defenders and other legitimate actors, to strengthen the human rights safeguards, to limit surveillance powers, and to spell out the data protection principles will give governments’ abusive practices a veneer of international legitimacy. It will also make digital communications more vulnerable to those cybercrimes that the Convention is meant to address. Ultimately, if the draft Convention cannot be fixed, it should be rejected. 

With the UN AHC’s concluding session about to resume, we call on the delegations of the Member States of the European Union and the European Commission’s delegation to redouble their efforts to address the highlighted gaps and ensure that the proposed Cybercrime Convention is narrowly focused in its material scope and not used to undermine human rights nor cybersecurity. Absent meaningful changes to address the existing shortcomings, we urge the delegations of EU Member States and the EU Commission to reject the draft Convention and not advance it to the UN General Assembly for adoption.

This statement is supported by the following organizations:

Access Now
Alternatif Bilisim
ARTICLE 19: Global Campaign for Free Expression
Centre for Democracy & Technology Europe
Committee to Protect Journalists
Digitalcourage
Digital Rights Ireland
Digitale Gesellschaft
Electronic Frontier Foundation (EFF)
epicenter.works
European Center for Not-for-Profit Law (ECNL) 
European Digital Rights (EDRi)
Global Partners Digital
International Freedom of Expression Exchange (IFEX)
International Press Institute 
IT-Pol Denmark
KICTANet
Media Policy Institute (Kyrgyzstan)
Privacy International
SHARE Foundation
Vrijschrift.org
World Association of News Publishers (WAN-IFRA)
Zavod Državljan D (Citizen D)

NETMundial+10 Multistakeholder Statement Pushes for Greater Inclusiveness in Internet Governance Processes

A new statement about strengthening internet governance processes emerged from the NETMundial +10 meeting in Brazil last month, strongly reaffirming the value of and need for a multistakeholder approach involving full and balanced participation of all parties affected by the internet—from users, governments, and private companies to civil society, technologists, and academics.

But the statement did more than reiterate commitments to more inclusive and fair governance processes. It offered recommendations and guidelines that, if implemented, can strengthen multistakeholder principles as the basis for global consensus-building and democratic governance, including in existing multilateral internet policymaking efforts.


The event and statement, to which EFF contributed with dialogue and recommendations, is a follow-up to the 2014 NETMundial meeting, which ambitiously sought to consolidate multistakeholder processes to internet governance and recommended 10 process principles. It’s fair to say that over the last decade, it’s been an uphill battle turning words into action.

Achieving truly fair and inclusive multistakeholder processes for internet governance and digital policy continues to face many hurdles. Governments, intergovernmental organizations, international standards bodies, and large companies have continued to wield their resources and power. Civil society organizations, user groups, and vulnerable communities are too often sidelined or permitted only token participation.

Governments often tout multistakeholder participation, but in practice, it is a complex task to achieve. The current Ad Hoc Committee negotiations of the proposed UN Cybercrime Treaty highlight the complexity and controversy of multistakeholder efforts. Although the treaty negotiation process was open to civil society and other nongovernmental organizations (NGOs), with positive steps like tracking changes to amendments, most real negotiations occur informally, behind closed doors, excluding NGOs.

This reality presents a stark contrast and practical challenge for truly inclusive multistakeholder participation, as the most important decisions are made without full transparency and broad input. This demonstrates that, despite the appearance of inclusivity, substantive negotiations are not open to all stakeholders.

Consensus building is another important multistakeholder goal but faces significant practical challenges because of the human rights divide among states in multilateral processes. For example, in the context of the Ad Hoc Committee, achieving consensus has remained largely unattainable because of stark differences in human rights standards among member States. Mechanisms for resolving conflicts and enabling decision-making should consider human rights laws to indicate redlines. In the UN Cybercrime Treaty negotiations, reaching consensus could potentially lead to a race to the bottom in human rights and privacy protections.

To be sure, seats at the policymaking table must be open to all to ensure fair representation. Multi-stakeholder participation in multilateral processes allows, for example, civil society to advocate for more human rights-compliant outcomes. But while inclusivity and legitimacy are essential, they alone do not validate the outcomes. An open policy process should always be assessed against the specific issue it addresses, as not all issues require global regulation or can be properly addressed in a specific policy or governance venue.

The NETmundial+10 Multistakeholder Statement, released April 30 following a two-day gathering in São Paulo of 400 registered participants from 60 countries, addresses issues that have prevented stakeholders, especially the less powerful, from meaningful participation, and puts forth guidelines aimed at making internet governance processes more inclusive and accessible to diverse organizations and participants from diverse regions.

For example, the 18-page statement contains recommendations on how to strengthen inclusive and diverse participation in multilateral processes, which includes State-level policy making and international treaty negotiations. Such guidelines can benefit civil society participation in, for example, the UN Cybercrime Treaty negotiations. EFF’s work with international allies in the UN negotiating process is outlined here.

The NETmundial statement takes asymmetries of power head on, recommending that governance processes provide stakeholders with information and resources and offer capacity-building to make these processes more accessible to those from developing countries and underrepresented communities. It sets more concrete guidelines and process steps for multistakeholder collaboration, consensus-building, and decision-making, which can serve as a roadmap in the internet governance sphere.

The statement also recommends strengthening the UN-convened Internet Governance Forum (IGF), a predominant venue for the frank exchange of ideas and multistakeholder discussions about internet policy issues. The multitude of initiatives and pacts around the world dealing with internet policy can cause duplication, conflicting outcomes, and incompatible guidelines, making it hard for stakeholders, especially those from the Global South, to find their place. 


The IGF could strengthen its coordination and information sharing role and serve as a venue for follow up of multilateral digital policy agreements. The statement also recommended improvements in the dialogue and coordination between global, regional, and national IGFs to establish continuity between them and bring global attention to local perspectives.

We were encouraged to see the statement recommend that IGF’s process for selecting its host country be transparent and inclusive and take into account human rights practices to create equitable conditions for attendance.

EFF and 45 digital and human rights organizations last year called on the UN Secretary-General and other decision-makers to reverse their decision to grant host status for the 2024 IGF to Saudi Arabia, which has a long history of human rights violations, including the persecution of human and women’s rights defenders, journalists, and online activists. Saudi Arabia’s draconian cybercrime laws are a threat to the safety of civil society members who might consider attending an event there.  

EFF Zine on Surveillance Tech at the Southern Border Shines Light on Ever-Growing Spy Network

Guide Features Border Tech Photos, Locations, and Explanation of Capabilities

SAN FRANCISCO—Sensor towers controlled by AI, drones launched from truck-bed catapults, vehicle-tracking devices disguised as traffic cones—all are part of an arsenal of technologies that comprise the expanding U.S. surveillance strategy along the U.S.-Mexico border, revealed in a new EFF zine for advocates, journalists, academics, researchers, humanitarian aid workers, and borderland residents.

Formally released today and available for download online in English and Spanish, “Surveillance Technology at the U.S.-Mexico Border” is a 36-page comprehensive guide to identifying the growing system of surveillance towers, aerial systems, and roadside camera networks deployed by U.S. law enforcement agencies along the Southern border, allowing for the real-time tracking of people and vehicles.

The devices and towers—some hidden, camouflaged, or moveable—can be found in heavily populated urban areas, small towns, fields, farmland, highways, dirt roads, and deserts in California, Arizona, New Mexico, and Texas.

The zine grew out of work by EFF’s border surveillance team, which involved meetings with immigrant rights groups and journalists, research into government procurement documents, and trips to the border. The team located, studied, and documented spy tech deployed and monitored by the Department of Homeland Security (DHS), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), National Guard, and Drug Enforcement Administration (DEA), often working in collaboration with local law enforcement agencies.

“Our team learned that while many people had an abstract understanding of the so-called ‘virtual wall,’ the actual physical infrastructure was largely unknown to them,” said EFF Director of Investigations Dave Maass. “In some cases, people had seen surveillance towers, but mistook them for cell phone towers, or they’d seen an aerostat flying in the sky and not known it was part of the U.S. border strategy.

“That's why we put together this zine; it serves as a field guide to spotting and identifying the large range of technologies that are becoming so ubiquitous that they are almost invisible,” said Maass.

The zine also includes a copy of EFF’s pocket guide to crossing the U.S. border and protecting information on smartphones, computers, and other digital devices.

The zine is available for republication and remixing under EFF’s Creative Commons Attribution License and features photography by Colter Thomas and Dugan Meyer, whose exhibit “Infrastructures of Control,” which incorporates some of EFF’s border research, opened in April at the University of Arizona. EFF has previously released a gallery of images of border surveillance that are available for publications to reuse, as well as a living map of known surveillance towers that make up the so-called “virtual wall.”

To download the zine:
https://www.eff.org/pages/zine-surveillance-technology-us-mexico-border

For more on border surveillance:
https://www.eff.org/issues/border-surveillance-technology

For EFF’s searchable Atlas of Surveillance:
https://atlasofsurveillance.org/ 

 

Contact:
Dave Maass
Director of Investigations

In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists

In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state government harassed, surveilled, and persecuted members of a lawyers’ group that defends human rights defenders, activists, and indigenous people, putting the attorneys’ lives at risk. 

The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization defending victims of political persecution and community activists for over 40 years.

The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the Court said. It noted the particular impact experienced by women defenders and those who had to leave the country amid threats, attacks, and harassment for representing victims.

The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. 

In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. 

Colombia's intelligence network was behind abusive surveillance practices in violation of the American Convention and did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists.

In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. 

The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below.

Prior Judicial Order for Communications Surveillance and Access to Data

The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse from state authorities. Its ruling recalled European Court of Human Rights case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to 'freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights'.”

Building on its ruling in the case Escher et al. v. Brazil, the Inter-American Court stated that

“[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added) 

According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”  

The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter.

The court had already recognized in Escher that the American Convention protects not only the content of communications but also any related information like the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata compared to content. We hope the court's new ruling helps to repeal measures allowing state authorities to access metadata without a previous judicial order.

Indeed, the court emphasized that the need for a prior judicial authorization "is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.” 

To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure. 

Informational Self-Determination Recognized as an Autonomous Human Right 

In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. 

“In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.”  

The protections that Article 11 grant to human dignity and private life safeguard a person's autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases. 

The court has explained that the scope of this right includes several components. First, people have the right to know what data about them are contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it’s been kept, whether and why it’s being shared with outside parties, and how it’s being processed. Next is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data also in justified circumstances, and fifth is the right to data portability as regulated by law. 

According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Holding back requested information is always exceptional, limited in time, and justified according to specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said.  

The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files.

The Vital Protection of the Right to Defend Human Rights

The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights.

For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities. 

Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State. 

Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices 

In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago on the Necessary and Proportionate principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities. 

In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights. 

As we argued, that didn’t happen despite Colombia’s intelligence legal framework, rather it was enabled by its flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, lacking provisions for prior, well-founded, judicial authorization for specific surveillance measures, and robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records. 

The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures. 

The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files. 

These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to vehemently condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to rule it out.

In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations. 

The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.  

When they do, violations must be fiercely investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court's findings to their intelligence activities. What’s certain is the dire need to fix a system that helped Colombia become the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such murders in Latin America. 

Ola Bini Faces Ecuadorian Prosecutors Seeking to Overturn Acquittal of Cybercrime Charge

Ola Bini, the software developer acquitted last year of cybercrime charges in a unanimous verdict in Ecuador, was back in court last week in Quito as prosecutors, using the same evidence that helped clear him, asked an appeals court to overturn the decision with bogus allegations of unauthorized access of a telecommunications system.

Armed with a grainy image of a telnet session—which the lower court already ruled was not proof of criminal activity—and testimony of an expert witness to the lower court—who never had access to the devices and systems involved in the alleged intrusion—prosecutors presented the theory that, by connecting to a router, Bini made partial unauthorized access in an attempt to break into a system provided by Ecuador’s national telecommunications company (CNT) to the presidency’s contingency center.

If this all sounds familiar, that’s because it is. In an unfounded criminal case plagued by irregularities, delays, and due process violations, Ecuadorian prosecutors have for the last five years sought to prove Bini violated the law by allegedly accessing an information system without authorization.

Bini, who resides in Ecuador, was arrested at the Quito airport in 2019 without being told why. He first learned about the charges from a TV news report depicting him as a criminal trying to destabilize the country. He spent 70 days in jail and cannot leave Ecuador or use his bank accounts.

Bini prevailed in a trial last year before a three-judge panel. The core evidence the Prosecutor’s Office and CNT’s lawyer presented to support the accusation of unauthorized access to a computer, telematic, or telecommunications system was a printed image of a telnet session allegedly taken from Bini’s mobile phone.

The image shows the user requesting a telnet connection to an open server using their computer’s command line. The open server warns that unauthorized access is prohibited and asks for a username. No username is entered. The connection then times out and closes. Rather than demonstrating that Bini intruded into the Ecuadorean telephone network system, it shows the trail of someone who paid a visit to a publicly accessible server—and then politely obeyed the server's warnings about usage and access.
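In protocol terms, the session described above amounts to nothing more than opening a TCP connection, reading the server's warning banner, and disconnecting without ever authenticating. A minimal sketch of that interaction, using a local stand-in server (the banner text, address, and port here are illustrative, not the actual CNT system):

```python
import socket
import threading

BANNER = b"Unauthorized access is prohibited.\r\nlogin: "

def banner_server(sock):
    """Accept one connection, send a login banner, and wait for input."""
    conn, _ = sock.accept()
    conn.sendall(BANNER)
    conn.settimeout(1.0)  # stand-in for the real server's idle timeout
    try:
        conn.recv(1024)   # no username ever arrives
    except socket.timeout:
        pass
    conn.close()

# Local stand-in for a publicly accessible telnet-style server.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=banner_server, args=(server,), daemon=True).start()

# The "session": connect, read the warning banner, send nothing, disconnect.
client = socket.create_connection(("127.0.0.1", port))
client.settimeout(2.0)
banner = b""
while b"login:" not in banner:
    banner += client.recv(1024)
client.close()  # walk away without entering a username

print(banner.decode())
```

Nothing in this exchange reads, writes, or bypasses anything on the server; the client only receives what the server volunteers to every visitor, which is the point the lower court accepted.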

Bini’s acquittal was a major victory for him and the work of security researchers. By assessing the evidence presented, the court concluded that both the Prosecutor’s Office and CNT failed to demonstrate a crime had occurred. There was no evidence that unauthorized access had ever happened, nor anything to sustain the malicious intent that article 234 of Ecuador’s Penal Code requires to characterize the offense of unauthorized access.

The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of cybercrime since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge, or what can't be verified from digital forensics, is not proper digital evidence.

Prosecutors appealed the verdict and are back in court using the same image that didn’t prove any crime was committed. At the March 26 hearing, prosecutors said their expert witness’s analysis of the telnet image shows there was connectivity to the router. The witness compared it to entering the yard of someone’s property to see if the gate to the property is open or closed. Entering the yard is analogous to connecting to the router, the witness said.

Actually, no. Our interpretation of the image, which was leaked to the media before Bini’s trial, is that it’s the internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and walking away. If this image could prove anything, it is that no unauthorized access happened.

Yet no expert analysis was conducted on the systems allegedly affected. The expert witness’s testimony was based on his analysis of a CNT report—he didn’t have access to the CNT router to verify its configuration. He didn’t digitally validate whether what was shown in the report actually happened, and he was never asked to verify the existence of an IP address owned or managed by CNT.

That’s not the only problem with the appeal proceedings. Deciding the appeal is a panel of three judges, two of whom ruled to keep Bini in detention after his arrest in 2019 because there were allegedly sufficient elements to establish a suspicion against him. The detention was later considered illegal and arbitrary because of a lack of such elements. Bini filed a lawsuit against the Ecuadorian state, including the two judges, for violating his rights. Bini’s defense team has sought to remove these two judges from the appeals case, but his requests were denied.

The appeals court panel is expected to issue a final ruling in the coming days.  

Lawmakers: Ban TikTok to Stop Election Misinformation! Same Lawmakers: Restrict How Government Addresses Election Misinformation!

In a case being heard Monday at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

They assert that agencies can't even pass along information about websites that state election officials have identified as disinformation, even if they don't request that any action be taken.

Yet just this week the vast majority of those same lawmakers said the government's interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

On Monday, the Supreme Court will hear oral arguments in Murthy v. Missouri, a case that raises the issue of whether the federal government violates the First Amendment by asking social media platforms to remove or negatively moderate user posts or accounts. In Murthy, the government contends that it can strongly urge social media sites to remove posts without violating the First Amendment, as long as it does not coerce them into doing so under the threat of penalty or other official sanction.

We recognize both the hazards of government involvement in content moderation and the proper role in some situations for the government to share its expertise with the platforms. In our brief in Murthy, we urge the court to adopt a view of coercion that includes indirectly coercive communications designed and reasonably perceived as efforts to replace the platform’s editorial decision-making with the government’s.

And we argue that close cases should go against the government. We also urge the court to recognize that the government may and, in some cases, should appropriately inform platforms of problematic user posts. But it’s the government’s responsibility to make sure that its communications with the platforms are reasonably perceived as being merely informative and not coercive.

In contrast, the Members of Congress signed an amicus brief in Murthy supporting placing strict limitations on the government’s interactions with social media companies. They argued that the government may hardly communicate at all with social media platforms when it detects problematic posts.

Notably, the specific posts they discuss in their brief include, among other things, posts the U.S. government suspects are foreign election interference. For example, the case includes allegations that the FBI and CISA improperly communicated with social media sites, allegations that boil down to the agencies passing on pertinent information, such as websites that had already been identified by state and local election officials as disinformation. The FBI did not request that any specific action be taken and sought only to understand how the sites' terms of service would apply.

As we argued in our amicus brief, these communications don't add up to the government dictating specific editorial changes it wanted. It was providing information useful for sites seeking to combat misinformation. But, following an injunction in Murthy, the government has ceased sharing intelligence about foreign election interference. Without the information, Meta reports its platforms could lack insight into the bigger threat picture needed to enforce its own rules.

The problem of election misinformation on social media also played a prominent role this past week when the U.S. House of Representatives approved a bill that would bar app stores from distributing TikTok as long as it is owned by its current parent company, ByteDance, which is headquartered in Beijing. The bill also empowers the executive branch to identify and similarly ban other apps that are owned by foreign adversaries.

As stated in the House Report that accompanied the so-called "Protecting Americans from Foreign Adversary Controlled Applications Act," the law is needed in part because members of Congress fear the Chinese government “push[es] misinformation, disinformation, and propaganda on the American public” through the platform. Those who supported the bill thus believe that the U.S. can take the drastic step of banning an app for the purposes of preventing the spread of “misinformation and propaganda” to U.S. users. A public report from the Office of the Director of National Intelligence was more specific about the threat, indicating a special concern for information meant to interfere with the November elections and foment societal divisions in the U.S.

Over 30 members of the House who signed the amicus brief in Murthy voted for the TikTok ban. So, many of the same people who supported the U.S. government’s efforts to rid a social media platform of foreign misinformation also argued that the government’s ability to address the very same content on other social media platforms should be sharply limited.

Admittedly, there are significant differences between the two positions. The government does have greater limits on how it regulates the speech of domestic companies than it does the speech of foreign companies.

But if the true purpose of the bill is to get foreign election misinformation off of social media, the inconsistency in the positions is clear. If ByteDance sells TikTok to domestic owners so that TikTok can stay in business in the U.S., and the same propaganda appears on the site, is the U.S. now powerless to do anything about it? If so, that would seem to undercut the importance of getting the information away from U.S. users, which is one of the chief purposes of the TikTok ban.

We believe there is an appropriate role for the government to play, within the bounds of the First Amendment, when it truly believes that there are posts designed to interfere with U.S. elections or undermine U.S. security on any social media platform. It is a far more appropriate role than banning a platform altogether.

Location Data Tracks Abortion Clinic Visits. Here’s What to Know

Our concerns about the selling and misuse of location data for those seeking reproductive and gender-affirming healthcare are escalating amid a recent wave of cases and incidents demonstrating that the digital trail we leave is being used by anti-abortion activists.

The good news is some states and tech companies are taking steps to better protect location data privacy, including information that endangers people needing or seeking information about reproductive and gender-affirming healthcare. But we know more must be done—by pharmacies, our email providers, and lawmakers—to plug gaping holes in location data protection.

Location data is highly sensitive, as it paints a picture of our daily lives—where we go, who we visit, when we seek medical care, or what clinics we visit. That’s what makes it so attractive to data brokers and law enforcement in states outlawing abortion and gender-affirming healthcare, and to those seeking to exploit such data for ideological or commercial purposes.

What we’re seeing is deeply troubling. Sen. Ron Wyden recently disclosed that vendor Near Intelligence allegedly gathered location data of people’s visits to nearly 600 Planned Parenthood locations across 48 states, without consent. It sold that data to an anti-abortion group, which used it in a massive anti-abortion ad campaign. The Wisconsin-based group used the geofenced data to send mobile ads to people who visited the clinics.
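A "geofence" here is simply a virtual perimeter around a physical place: a broker checks whether each location ping from a phone falls within some radius of the target address. A minimal sketch of that check using the haversine great-circle distance (the coordinates and the 100-meter radius below are illustrative, not drawn from the case):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def in_geofence(ping, center, radius_m):
    """True if a location ping falls inside a circular geofence."""
    return haversine_m(ping[0], ping[1], center[0], center[1]) <= radius_m

# Hypothetical clinic location and a 100 m fence around it.
clinic = (40.7410, -73.9897)
fence_radius = 100  # meters

pings = [
    (40.7411, -73.9899),  # a few meters away: inside the fence
    (40.7500, -73.9800),  # roughly a kilometer away: outside
]
flags = [in_geofence(p, clinic, fence_radius) for p in pings]
print(flags)  # [True, False]
```

Run against a commercial feed of device pings, a check this simple is enough to build the audience list for a targeted ad campaign—which is why the raw location data itself is the thing that needs protecting.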

It’s hardly a leap to imagine that law enforcement and bounty hunters in anti-abortion states would gladly buy the same data to find out who is visiting Planned Parenthood clinics and try to charge and imprison women, their families, doctors, and caregivers. That’s the real danger of an unregulated data broker industry; anyone can buy what’s gathered from warrantless surveillance, for whatever nefarious purpose they choose.

For example, police in Idaho, where abortion is illegal, used cell phone data in an investigation against an Idaho woman and her son charged with kidnapping. The data showed that they had taken the son’s minor girlfriend to Oregon, where abortion is legal, to obtain an abortion.

The exploitation of location data is not the only problem. Information about prescription medicines we take is not protected against law enforcement requests. The nation’s eight largest pharmacy chains, including CVS, Walgreens, and Rite Aid, have routinely and secretly turned over prescription records of thousands of Americans to law enforcement agencies or other government entities without a warrant, according to a congressional inquiry.

Many people may not know that their prescription records can be obtained by law enforcement without too much trouble. There’s not much standing between someone’s self-managed abortion medication and a law enforcement records demand. In April the U.S. Health and Human Services Department proposed a rule that would prevent healthcare providers and insurers from giving information to state officials trying to prosecute those seeking or providing a legal abortion. A final rule has not yet been published.

Exploitation of location and healthcare data to target communities could easily expand to other groups working to protect bodily autonomy, especially those most likely to suffer targeted harassment and bigotry. With states
passing and proposing bills restricting gender-affirming care and state law enforcement officials pursuing medical records of transgender youth across state lines, it’s not hard to imagine them buying or using location data to find people to prosecute.

To better protect people against police access to sensitive health information, lawmakers in a few states have taken action. In 2022, California enacted two laws protecting abortion data privacy and preventing California companies from sharing abortion data with out-of-state entities.

Then, last September the state enacted a shield law prohibiting California-based companies, including social media and tech companies, from disclosing patients’ private communications regarding healthcare that is legally protected in the state.

Massachusetts lawmakers have proposed the Location Shield Act, which would prohibit the sale of cellphone location information to data brokers. The act would make it harder to trace the path of those traveling to Massachusetts for abortion services.

Of course, tech companies have a huge role to play in location data privacy. EFF was glad when Google said in 2022 it would delete users’ location history for visits to medical facilities, including abortion clinics and counseling and fertility centers. Google pledged that when the location history setting on a device was turned on, it would delete entries for particularly personal places like reproductive health clinics soon after such a visit.

But a study by Accountable Tech testing Google’s pledge found the company wasn’t living up to its promises and continued to collect and retain location data from individuals visiting abortion clinics. Accountable Tech reran the study in late 2023 and the results were again troubling—Google still retained location search query data for some visits to Planned Parenthood clinics. It appears users will have to manually delete location search history to remove information about the routes they take when visiting sensitive locations; it doesn’t happen automatically.

Late last year, Google announced plans to move saved Timeline entries in Google Maps to users’ devices. Users who want to keep the entries could choose to back up the data to the cloud, where it would be automatically encrypted and out of reach even to Google.

These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. But when these features are coming is uncertain—though Google said in December they’re “coming soon.”

Google should implement the changes sooner rather than later. In the meantime, those seeking reproductive and gender-affirming healthcare, or information about it, can find tips on how to protect themselves in our Surveillance Self-Defense guide.

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people
globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement (including an intent to cause damage, defraud, or harm) is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for
unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, a violation of private terms of service (TOS), a minor infraction ordinarily considered a civil issue, could be elevated into a criminal offense on a global scale via this treaty.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually “effective.” This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking (which can be done innocently, simply by changing location) should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing system security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers, whether within organizations or independent, discover, report, and assist in fixing tens of thousands of critical Common Vulnerabilities and Exposures (CVEs) reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet it often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty’s core cybercrime provisions closely mirror the Council of
Europe’s Budapest Convention, the impact of cybercrime regimes and security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.

To be submitted by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282 on behalf of 124 signatories.

Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security Researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor, Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)

Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Vía Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have
raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6,
which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems.
This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States’ powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a
unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders and add robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a
series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the outset; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft,
released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint
statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

Protecting Students from Faulty Software and Legislation: 2023 Year in Review

Lawmakers, school districts, educational technology companies, and others keep rolling out legislation and software that threaten students’ privacy, free speech, and access to social media, in the name of “protecting” children. At EFF, we fought back against this overreach and demanded accountability and transparency.

Bad bills and invasive monitoring systems, though sometimes well-meaning, hurt students rather than protect them from the perceived dangers of the internet and social media. We saw many efforts to bar young people, and students, from digital spaces, censor what they are allowed to see and share online, and monitor and control when and how they can do it. This makes it increasingly difficult for them to access information about everything from gun violence and drug abuse to politics and LGBTQ+ topics, all because some software or elected official considers these topics “harmful.”

In response, we doubled down on exposing faulty surveillance software, long a problem in many schools across the country. We launched a new project called the Red Flag Machine, an interactive quiz and report demonstrating the absurd inefficiency—and potential dangers—of student surveillance software that schools across the country use and that routinely invades the privacy of millions of children.

We’ll continue to fight student surveillance and censorship, and we are heartened to see students fighting back

The project grew out of our investigation of GoGuardian, computer monitoring software used in about 11,500 schools to surveil about 27 million students—mostly in middle and high school—according to the company. The software allows school officials and teachers to monitor students’ computers and devices, talk to them via chat or webcam, block sites considered “offensive,” and get alerts when students access content that the software, or the school, deems harmful or explicit.

Our investigation showed that the software inaccurately flags massive amounts of useful material. The software flagged sites about black authors and artists, the Holocaust, and the LGBTQ+ rights movement. The software flagged the official Marine Corps’ fitness guide and the bios of the cast of Shark Tank. Bible.com was flagged because the text of Genesis 3 contained the word “naked.” We found thousands more examples of mis-flagged sites.
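This failure mode is easy to demonstrate with a toy sketch. The word list and function below are purely hypothetical illustrations, not GoGuardian's actual code or keywords: a filter that flags pages on bare keyword matches, with no notion of context, trips on Genesis 3 just as readily as on genuinely explicit content.

```python
# Hypothetical sketch of context-free keyword flagging; the word list and
# example pages are illustrative and not taken from any real product.
FLAG_KEYWORDS = {"naked", "drug", "shoot"}

def flag_page(text: str) -> set:
    """Return the keywords that would flag this page, ignoring all context."""
    words = {w.strip(".,;:\"'()").lower() for w in text.split()}
    return FLAG_KEYWORDS & words

# An innocuous passage (paraphrasing Genesis 3) trips the filter anyway...
print(flag_page("And they were both naked, the man and his wife."))  # {'naked'}
# ...while a page on the same topic that avoids the exact token sails through.
print(flag_page("The first humans realized they were unclothed."))   # set()
```

Real products use longer lists and some scoring, but as long as the decision rests on isolated tokens rather than meaning, benign pages about art, health, or scripture will keep tripping the same wires.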

EFF built the Red Flag Machine to expose the ludicrous results of GoGuardian’s flagging algorithm. In addition to reading our research about the software, you can take a quiz that presents websites flagged by the software, and guess which of five possible words triggered the flag. The results would be funny if they were not so potentially harmful.

Congress Takes Aim At Students and Young People

Meanwhile, Congress this year resurrected the Kids Online Safety Act (KOSA), a bill that would increase surveillance and restrict access to information in the name of protecting children online—including students. KOSA would give power to state attorneys general to decide what content on many popular online platforms is dangerous for young people, and would enable censorship and surveillance. Sites would likely be required to block important educational content, often made by young people themselves, about how to deal with anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal thoughts. We urged Congress to reject this bill and encouraged people to tell their senators and representative that KOSA will censor the internet but not help kids. 

We also called out the brazen Eyes on the Board Act, which aims to end social media use entirely in schools. This heavy-handed bill would cut some federal funding to any school that doesn’t block all social media platforms. We can understand the desire to ensure students are focusing on schoolwork when in class, but this bill tells teachers and school officials how to do their jobs, and enforces unnecessary censorship.

Many schools already don’t allow device use in the classroom and block social media sites and other content on school issued devices. Too much social media is not a problem that teachers and administrators need the government to correct—they already have the tools and know-how to do it.

Unfortunately, we’ve seen a slew of state bills that also seek to control what students and young people can access online. There are bills in Texas, Utah, Arkansas, Florida, and Montana, to name just a few, and keeping up with all this bad legislation is like a game of whack-a-mole.

Finally, teachers and school administrators are grappling with whether generative AI use should be allowed, and whether they should deploy detection tools to find students who have used it. We think the answer to both is no. AI detection tools are very inaccurate and carry significant risks of falsely flagging students for plagiarism. And AI use is growing exponentially and will likely have a significant impact on students’ lives and futures. They should be learning about and exploring generative AI now to understand some of its benefits and flaws. Demonizing it only deprives students of the chance to learn about a technology that may change the world around us.

We’ll continue to fight student surveillance and censorship, and we are heartened to see students fighting back against efforts to supposedly protect children that actually give government control over who gets to see what content. It has never been more important for young people to defend our democracy and we’re excited to be joining with them. 

If you’re interested in learning more about protecting your privacy at school, take a look at our Surveillance Self-Defense guide on privacy for students.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

In Landmark Battle Over Free Speech, EFF Urges Supreme Court to Strike Down Texas and Florida Laws that Let States Dictate What Speech Social Media Sites Must Publish

Laws Violate First Amendment Protections that Help Create Diverse Forums for Users’ Free Expression

WASHINGTON D.C.—The Electronic Frontier Foundation (EFF) and five organizations defending free speech today urged the Supreme Court to strike down laws in Florida and Texas that let the states dictate certain speech social media sites must carry, violating the sites’ First Amendment rights to curate content they publish—a protection that benefits users by creating speech forums accommodating their diverse interests, viewpoints, and beliefs.

The court’s decisions about the constitutionality of the Florida and Texas laws—the first laws to inject government mandates into social media content moderation—will have a profound impact on the future of free speech. At stake is whether Americans’ speech on social media must adhere to government rules or be free of government interference.

Social media content moderation is highly problematic, and users are rightly often frustrated by the process and concerned about private censorship. But retaliatory laws allowing the government to interject itself into the process, in any form, raise serious First Amendment, and broader human rights, concerns, said EFF in a brief filed with the National Coalition Against Censorship, the Woodhull Freedom Foundation, Authors Alliance, Fight for The Future, and First Amendment Coalition.

“Users are far better off when publishers make editorial decisions free from government mandates,” said EFF Civil Liberties Director David Greene. “These laws would force social media sites to publish user posts that are at best, irrelevant, and, at worst, false, abusive, or harassing.

“The Supreme Court needs to send a strong message that the government can’t force online publishers to give their favored speech special treatment,” said Greene.

Social media sites should do a better job at being transparent about content moderation and self-regulate by adhering to the Santa Clara Principles on Transparency and Accountability in Content Moderation. But the Principles are not a template for government mandates.

The Texas law broadly mandates that online publishers can’t decline to publish others’ speech based on anyone’s viewpoint expressed on or off the platform, even when that speech violates the sites' rules. Content moderation practices that can be construed as viewpoint-based, and virtually all of them can be, are barred by the law. Under it, sites that bar racist material, knowing their users object to it, would be forced to carry it. Sites catering to conservatives couldn’t block posts pushing liberal agendas.

The Florida law requires that social media sites grant special treatment to electoral candidates and “journalistic enterprises” and not apply their regular editorial practices to them, even if they violate the platforms' rules. The law gives preferential treatment to political candidates, preventing publishers at any point before an election from canceling their accounts or downgrading their posts or posts about them, giving them free rein to spread misinformation or post about content outside the site’s subject matter focus. Users not running for office, meanwhile, enjoy no similar privilege.

What’s more, the Florida law requires sites to disable algorithms with respect to political candidates, so their posts appear chronologically in users’ feeds, even if a user prefers a curated feed. And, in addition to dictating what speech social media sites must publish, the laws also place limits on sites' ability to amplify content, use algorithmic ranking, and add commentary to posts.

“The First Amendment generally prohibits government restrictions on speech based on content and viewpoint and protects private publishers’ ability to select what they want to say,” said Greene. “The Supreme Court should not grant states the power to force their preferred speech on users who would choose not to see it.”

“As a coalition that represents creators, readers, and audiences who rely on a diverse, vibrant, and free social media ecosystem for art, expression, and knowledge, the National Coalition Against Censorship hopes the Court will reaffirm that government control of media platforms is inherently at odds with an open internet, free expression, and the First Amendment,” said Lee Rowland, Executive Director of National Coalition Against Censorship.

“Woodhull is proud to lend its voice in support of online freedom and against government censorship of social media platforms,” said Ricci Joy Levy, President and CEO at Woodhull Freedom Foundation. “We understand the important freedoms that are at stake in this case and implore the Court to make the correct ruling, consistent with First Amendment jurisprudence.”

"Just as the press has the First Amendment right to exercise editorial discretion, social media platforms have the right to curate or moderate content as they choose. The government has no business telling private entities what speech they may or may not host or on what terms," said David Loy, Legal Director of the First Amendment Coalition.

For the brief:
https://www.eff.org/document/eff-brief-moodyvnetchoice

Contact: David Greene, Civil Liberties Director

EFF to Supreme Court: Fifth Amendment Protects People from Being Forced to Enter or Hand Over Cell Phone Passcodes to the Police

Lower Court Ruling Undermining Protections Against Self-Incrimination Should Be Reversed

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF) today asked the Supreme Court to overturn a ruling undermining Fifth Amendment protections against self-incrimination and find that constitutional safeguards prevent police from forcing people to provide or use passcodes for their cell phones so officers can access the tremendous amount of private information on phones.

At stake is the fundamental principle that the government can’t force people to testify against themselves, including by revealing or using their passcodes.

“When the government demands someone turn over or enter their passcode, it is forcing that person to disclose the contents of their mind and provide a link in a chain of possibly incriminating evidence,” said EFF Surveillance Litigation Director Andrew Crocker. “Whenever the government calls on someone to use memorized information to aid in their own prosecution—whether it be a cellphone passcode, a combination to a safe, or even their birthdate—the Fifth Amendment applies.”

The Illinois Supreme Court in the case People v. Sneed erroneously ruled that the Fifth Amendment doesn’t apply to compelled entry of passcodes because they are just a string of numbers memorized by the phone’s owner with minimal independent value—and therefore not a form of testimony. The Illinois court erred further by ruling that the passcode at issue fell under the dubious “foregone conclusion exception” to the Fifth Amendment because the government agents already knew it existed and the defendant knew the code.

Federal and state courts are split on whether the Fifth Amendment prohibits police from compelling individuals to unlock their cell phones so prosecutors can look for incriminating evidence, and when and how the “foregone conclusion exception” applies. Only the Supreme Court can resolve this split, EFF said in a brief today.

“The Supreme Court should find that Fifth Amendment protection against self-incrimination extends to the digital age and applies to turning over or entering a passcode,” said EFF Staff Attorney Lisa Femia.* “Cell phones hold an unprecedented amount of our private information and device searches have become routine in law enforcement investigations. It’s imperative that the court make clear that the Fifth Amendment doesn’t allow the government to require people to hand over their passcodes and assist in their own prosecution.”

*Admitted in New York and Washington, D.C., only; not admitted in California

For the brief: https://www.eff.org/document/sneed-v-illinois-eff-brief



Contact: Andrew Crocker, Surveillance Litigation Director

Platforms Must Stop Unjustified Takedowns of Posts By and About Palestinians

Legal intern Muhammad Essa Fasih contributed to this post.

Social media is a crucial means of communication in times of conflict—it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity. Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.

In the weeks since war between Hamas and Israel began, social media platforms have removed content from or suspended accounts of Palestinian news sites, activists, journalists, students, and Arab citizens in Israel, interfering with the dissemination of news about the conflict and silencing voices expressing concern for Palestinians.

The platforms say some takedowns were caused by security issues, technical glitches, mistakes that have been fixed, or stricter rules meant to reduce hate speech. But users complain of unexplained removals of posts about Palestine since the October 7 Hamas terrorist attacks.

Meta’s Facebook shut down the page of independent Palestinian website Quds News Network, a primary source of news for Palestinians with 10 million followers. The network said its Arabic and English news pages had been deleted from Facebook, though it had been fully complying with Meta's defined media standards. Quds News Network has faced similar platform censorship before—in 2017, Facebook censored its account, as did Twitter in 2020.

Additionally, Meta’s Instagram has locked or shut down accounts with significant followings. Among these are Let’s Talk Palestine, an account with over 300,000 followers that shares pro-Palestinian informative content, and Palestinian media outlet 24M. Meta said the accounts were locked for security reasons after signs that they were compromised.

The account of the news site Mondoweiss was also banned by Instagram and taken down on TikTok; it was later restored on both platforms.

Meanwhile, Instagram, TikTok, and LinkedIn users sympathetic to or supportive of the plight of Palestinians have complained of “shadow banning,” a process in which the platform limits the visibility of a user's posts without notifying them. Users say the platforms limited the visibility of posts that contained the Palestinian flag.

Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules. Responding to a surge in hate speech after Oct. 7, the company lowered the threshold for predicting whether comments qualify as harassment or incitement to violence from 80 percent to 25 percent for users in Palestinian territories. Some content creators are using code words and emojis and shifting the spelling of certain words to evade automated filtering. Meta needs to be more transparent about decisions that downgrade users’ speech that does not violate its rules.

For some users, posts have led to more serious consequences. Palestinian citizens of Israel, including well-known singer Dalal Abu Amneh from Nazareth, have been arrested for social media postings about the war in Gaza that are alleged to express support for the terrorist group Hamas.

Amneh’s case demonstrates a disturbing trend concerning social media posts supporting Palestinians. Amneh’s post of the Arabic motto “There is no victor but God” and the Palestinian flag was deemed incitement. Amneh, whose music celebrates Palestinian heritage, was expressing religious sentiment, her lawyer said, not calling for violence as the police claimed.

She received hundreds of death threats and filed a complaint with Israeli police, only to be taken into custody. Her post was removed. Israeli authorities are treating any expression of support or solidarity with Palestinians as illegal incitement, the lawyer said.

Content moderation does not work at scale even in the best of times, as we have said repeatedly. At all times, mistakes can lead to censorship; during armed conflicts they can have devastating consequences.

Whether through content moderation or technical glitches, platforms may also unfairly label people and communities. Instagram, for example, inserted the word “terrorist” into the profiles of some Palestinian users when its auto-translation converted the Palestinian flag emoji followed by the Arabic word for “Thank God” into “Palestinian terrorists are fighting for their freedom.” Meta apologized for the mistake, blaming it on a bug in auto-translation. The translation is now “Thank God.”

Palestinians have long fought private censorship, so what we are seeing now is not particularly new. But it is growing at a time when online speech protections are sorely needed. We call on companies to clarify their rules, including any specific changes that have been made in relation to the ongoing war; to stop the knee-jerk reaction of treating posts expressing support for Palestinians—or notifying users of peaceful demonstrations, or documenting violence and the loss of loved ones—as incitement; and to follow their own existing standards to ensure that moderation remains fair and unbiased.

Platforms should also follow the Santa Clara Principles on Transparency and Accountability in Content Moderation: notify users when, how, and why their content has been actioned, and give them the opportunity to appeal. We know Israel has worked directly with Facebook, requesting and garnering removal of content it deemed incitement to violence and suppressing posts by Palestinians about human rights abuses during May 2021 demonstrations that turned violent.

The horrific violence and death in Gaza is heartbreaking. People are crying out to the world, to family and friends, to co-workers, religious leaders, and politicians their grief and outrage. Labeling large swaths of this outpouring of emotion by Palestinians as incitement is unjust and wrongly denies people an important outlet for expression and solace.

EFF to Supreme Court: Reverse Dangerous Prior Restraint Ruling Upholding FBI Gag on X’s Surveillance Transparency Report

Ninth Circuit Ruling Gives Government Unilateral Power to Suppress Speech

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF) urged the Supreme Court today to review and reverse a dangerous ruling allowing the Justice Department to censor X’s ability to publish information about government requests for the platform’s private user data, a decision that undermines at least a hundred years of First Amendment case law on when the government can bar private speech before it is published.

The government has been fighting X, formerly known as Twitter, for nearly a decade to stop it from publishing a 2013 transparency report that would give its users a more complete picture of how many times the government—armed with orders from a secret court and the FBI—demanded customer information for national security surveillance.

Transparency reports have been a bedrock for shining a light on the secretive world of FBI national security letters and Foreign Intelligence Surveillance Act (FISA) orders compelling platforms to turn over their users’ private information. The reports disclose aggregate numbers of data requests—not the underlying targets of government requests or other such details—often by country and type of request, allowing users and researchers to see trends in surveillance activity.

Blocking such data prior to publication—a "prior restraint” in legal terms—has long been subject to the most demanding First Amendment protections to ensure that the government is not allowed to be an unreviewable censor stifling individuals’ right to free speech.

But the U.S. Court of Appeals for the Ninth Circuit, in upholding dismissal of X’s lawsuit challenging DOJ’s decision to block the 2013 transparency report, found that a large body of prior restraint law didn’t apply. The court said that gagging companies from publishing numerical data provided “confidentially” as part of a “legitimate government process” doesn’t “present the grave dangers of a censorship system.”

While the case X v. Garland involves a 2013 report, its potential consequences are immediate and far-reaching. Applying the Ninth Circuit standards, virtually anyone who interacts with a government agency could be gagged from speaking publicly about it.

Individuals who had interactions with law enforcement or border officials—such as someone being interviewed as a witness to a crime or someone subjected to police misconduct—could be barred from telling their family or going to the press. This is wrong, and EFF is urging the Supreme Court to set things right.

“The Ninth Circuit decision marked a new low in judicial deference to government demands for secrecy because of national security,” said EFF Surveillance Litigation Director Andrew Crocker. “The undisputed understanding of prior restraint in First Amendment law is that it’s the least tolerable infringement of free speech rights.”

“If allowed to stand, the appeals court ruling opens the door to the government unilaterally blocking speech about matters of public interest, while restricting the ability of speakers like X to test such gag orders in court,” said EFF Civil Liberties Director David Greene. “The Supreme Court should step in to salvage the prior restraint doctrine and prevent an unacceptable expansion of the government’s power to censor speech.”

For EFF's brief:
https://www.eff.org/document/eff-brief-twitter-vs-garland

 

Contact: Andrew Crocker, Surveillance Litigation Director