
The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses

This is the second post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out our detailed analysis on the criminalization of security research activities under the proposed convention.

The United Nations Ad Hoc Committee is just weeks away from finalizing an overly broad Cybercrime Draft Convention. This draft would normalize unchecked domestic surveillance and rampant government overreach, allowing serious human rights abuses around the world.

The latest draft of the convention—originally spearheaded by Russia but since then the subject of two and a half years of negotiations—still authorizes broad surveillance powers without robust safeguards and fails to spell out data protection principles essential to prevent government abuse of power.

As the August 9 finalization date approaches, Member States have a last chance to address the convention’s lack of safeguards: prior judicial authorization, transparency, user notification, and independent oversight, as well as data protection principles such as data minimization and purpose limitation. If left as is, the convention can and will be wielded as a tool for systemic rights violations.

Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards or reject the treaty altogether. These domestic surveillance powers are critical as they underpin international surveillance cooperation.

EFF’s Advocacy for Human Rights Safeguards

EFF has consistently advocated for human rights safeguards to be a baseline for both the criminal procedural measures and international cooperation chapters. The collection and use of digital evidence can implicate human rights, including privacy, free expression, fair trial, and data protection. Strong safeguards are essential to prevent government abuse.

Regrettably, many states already fall short in these regards. In some cases, surveillance laws have been used to justify overly broad practices that disproportionately target individuals or groups based on their political views—particularly ethnic and religious groups. This leads to the suppression of free expression and association, the silencing of dissenting voices, and discriminatory practices. Examples of these abuses include covert surveillance of internet activity without a warrant, using technology to track individuals in public, and monitoring private communications without legal authorization, oversight, or safeguards.

The Special Rapporteur on the rights to freedom of peaceful assembly and of association has already sounded the alarm about the dangers of current surveillance laws, urging states to revise and amend these laws to comply with international human rights norms and standards governing the rights to privacy, free expression, peaceful assembly, and freedom of association. The UN Cybercrime Convention must be radically amended to avoid entrenching and expanding these existing abuses globally. If not amended, it must be rejected outright.

How the Convention Fails to Protect Human Rights in Domestic Surveillance

The idea that checks and balances are essential to avoid abuse of power is a basic “Government 101” concept. Yet throughout the negotiation process, Russia and its allies have sought to chip away at the already-weakened human rights safeguards and conditions outlined in Article 24 of the proposed Convention. 

Article 24 as currently drafted requires every country that agrees to this convention to ensure that, when it creates, uses, or applies the surveillance powers and procedures described in the domestic procedural measures chapter, it does so under its own laws. These laws must protect human rights and comply with international human rights law. The principle of proportionality must be respected, meaning any surveillance measure should be appropriate and not excessive in relation to the legitimate aim pursued.

Why Article 24 Falls Short

1. The Critical Missing Principles

While the incorporation of the principle of proportionality in Article 24(1) is commendable, the article still fails to explicitly mention the principles of legality, necessity, and non-discrimination, which hold equivalent status to proportionality under human rights law as it applies to surveillance. A primer:

  • The principle of legality requires that restrictions on human rights including the right to privacy be authorized by laws that are clear, publicized, precise, and predictable, ensuring individuals understand what conduct might lead to restrictions on their human rights.
  • The principles of necessity and proportionality ensure that any interference with human rights is demonstrably necessary to achieve a legitimate aim and includes only measures that are proportionate to that aim.
  • The principle of non-discrimination requires that laws, policies and human rights obligations be applied equally and fairly to all individuals, without any form of discrimination based on race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status, including the application of surveillance measures.

Without including all these principles, the safeguards are incomplete and inadequate, increasing the risk of misuse and abuse of surveillance powers.

2. Inadequate Specific Safeguards 

Article 24(2) requires countries to include, where “appropriate,” specific safeguards like:

  • judicial or independent review, meaning surveillance actions must be reviewed or authorized by a judge or an independent regulator.
  • the right to an effective remedy, meaning people must have ways to challenge or seek remedy if their rights are violated.
  • justification and limits, meaning there must be clear reasons for using surveillance and limits on how much surveillance can be done and for how long.

Article 24(2) introduces three problems:

2.1 The Pitfalls of Making Safeguards Dependent on Domestic Law

Although these safeguards are mentioned, making them contingent on domestic law can vastly weaken their effectiveness, as national laws vary significantly and many of them won’t provide adequate protections. 

2.2 The Risk of Ambiguous Terms Allowing Cherry-Picked Safeguards

The use of vague terms like “as appropriate” in describing how safeguards will apply to individual procedural powers allows for varying interpretations, potentially leading to weaker protections for certain types of data in practice. For example, many states provide minimal or no safeguards for accessing subscriber data or traffic data despite the intrusiveness of resulting surveillance practices. These powers have been used to identify anonymous online activity, to locate and track people, and to map people’s contacts. By granting states broad discretion to decide which safeguards to apply to different surveillance powers, the convention fails to ensure the text will be implemented in accordance with human rights law. Without clear mandatory requirements, there is a real risk that essential protections will be inadequately applied or omitted altogether for certain specific powers, leaving vulnerable populations exposed to severe rights violations. Essentially, a country could just decide that some human rights safeguards are superfluous for a particular kind or method of surveillance, and dispense with them, opening the door for serious human rights abuses.

2.3 Critical Safeguards Missing from Article 24(2)

The need for prior judicial authorization, transparency, and user notification is critical to any effective and proportionate surveillance power, yet none of these is included in Article 24(2).

Prior judicial authorization means that before any surveillance action is taken, it must be approved by a judge. This ensures an independent assessment of the necessity and proportionality of the surveillance measure before it is implemented. Although Article 24 mentions judicial or other independent review, it lacks a requirement for prior judicial authorization. This is a significant omission that increases the risk of abuse and infringement on individuals' rights. Judicial authorization acts as a critical check on the powers of law enforcement and intelligence agencies.

Transparency involves making the existence and extent of surveillance measures known to the public; people must be fully informed of the laws and practices governing surveillance so that they can hold authorities accountable. Article 24 lacks explicit provisions for transparency, so surveillance measures could be conducted in secrecy, undermining public trust and preventing meaningful oversight. Transparency is essential for ensuring that surveillance powers are not misused and that individuals are aware of how their data might be collected and used.

User notification means that individuals who are subjected to surveillance are informed about it, either at the time of the surveillance or afterward when it no longer jeopardizes the investigation. The absence of a user notification requirement in Article 24(2) deprives people of the opportunity to challenge the legality of the surveillance or seek remedies for any violations of their rights. User notification is a key component of protecting individuals’ rights to privacy and due process. It may be delayed, with appropriate justification, but it must still eventually occur and the convention must recognize this.

Independent oversight involves monitoring by an independent body to ensure that surveillance measures comply with the law and respect human rights. This body can investigate abuses, provide accountability, and recommend corrective actions. While Article 24 mentions judicial or independent review, it does not establish a clear mechanism for ongoing independent oversight. Effective oversight requires a dedicated, impartial body with the authority to review surveillance activities continuously, investigate complaints, and enforce compliance. The lack of a robust oversight mechanism weakens the framework for protecting human rights and allows potential abuses to go unchecked.

Conclusion

While it’s somewhat reassuring that Article 24 acknowledges the binding nature of human rights law and its application to surveillance powers, it is utterly unacceptable how vague the article remains about what that actually means in practice. The “as appropriate” clause is a dangerous loophole, letting states implement intrusive powers with minimal limitations and no prior judicial authorization, only to then disingenuously claim this was “appropriate.” This is a blatant invitation for abuse. There’s nothing “appropriate” about this, and the convention must be unequivocally clear about that.

This draft in its current form is an egregious betrayal of human rights and an open door to unchecked surveillance and systemic abuses. Unless these issues are rectified, Member States must recognize the severe flaws and reject this dangerous convention outright. The risks are too great, the protections too weak, and the potential for abuse too high. It’s long past time to stand firm and demand nothing less than a convention that genuinely safeguards human rights.

Check out our detailed analysis on the criminalization of security research activities under the UN Cybercrime Convention. Stay tuned for our next post, where we'll explore other critical areas affected by the convention, including its scope and human rights safeguards.




If Not Amended, States Must Reject the Flawed Draft UN Cybercrime Convention Criminalizing Security Research and Certain Journalism Activities

This is the first post in a series highlighting the problems and flaws in the proposed UN Cybercrime Convention. Check out the second post, “The UN Cybercrime Draft Convention is a Blank Check for Surveillance Abuses.”

The latest and nearly final version of the proposed UN Cybercrime Convention—dated May 23, 2024 but released today June 14—leaves security researchers’ and investigative journalists’ rights perilously unprotected, despite EFF’s repeated warnings.

The world benefits from people who help us understand how technology works and how it can go wrong. Security researchers, whether independently or within academia or the private sector, perform this important role of safeguarding information technology systems. Relying on the freedom to analyze, test, and discuss IT systems, researchers identify vulnerabilities that can cause major harms if left unchecked. Similarly, investigative journalists and whistleblowers play a crucial role in uncovering and reporting on matters of significant public interest including corruption, misconduct, and systemic vulnerabilities, often at great personal risk.

For decades, EFF has fought for security researchers and journalists, provided legal advice to help them navigate murky criminal laws, and advocated for their right to conduct security research without fear of legal repercussions. We’ve helped researchers when they’ve faced threats for performing or publishing their research, including identifying and disclosing critical vulnerabilities in systems. We’ve seen how vague and overbroad laws on unauthorized access have chilled good-faith security research, threatening those who are trying to keep us safe or report on public interest topics. 

Now, just as some governments have individually finally recognized the importance of protecting security researchers’ work, many of the UN convention’s criminalization provisions threaten to spread antiquated and ambiguous language around the world with no meaningful protections for researchers or journalists. If these and other issues are not addressed, the convention poses a global threat to cybersecurity and press freedom, and UN Member States must reject it.

This post will focus on one critical aspect of coders’ rights under the newest released text: the provisions that jeopardize the work of security researchers and investigative journalists. We will delve into other aspects of the convention in subsequent posts.

How the Convention Fails to Protect Security Research and Reporting on Public Interest Matters

What Provisions Are We Discussing?

Articles 7 to 11 of the Criminalization Chapter—covering illegal access, illegal interception, interference with electronic data, interference with ICT systems, and misuse of devices—are core cybercrimes that security researchers have often been accused of committing as a result of their work. (In previous drafts of the convention, these were Articles 6-10.)

  • Illegal Access (Article 7): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities.
  • Illegal Interception (Article 8): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require malicious criminal intent (mens rea).
  • Interference with Data (Article 9) and Interference with Computer Systems (Article 10): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks that could be described as “interference” even though they don’t cause harm and are performed without criminal malicious intent.

All of these articles fail to include a mandatory element of criminal intent to cause harm, steal, or defraud. A requirement that the activity cause serious harm is also absent from Article 10 and optional in Article 9. These safeguards must be mandatory.

What We Told the UN Drafters of the Convention in Our Letter

Earlier this year, EFF submitted a detailed letter to the drafters of the UN Cybercrime Convention on behalf of 124 signatories, outlining essential protections for coders. 

Our recommendations included defining unauthorized access to include only those accesses that bypass security measures, and only where such security measures count as effective. The convention’s existing language harks back to cases where people were criminally prosecuted just for editing part of a URL.
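
To make concrete what “editing part of a URL” can mean, here is a minimal, hypothetical illustration (the URL and parameter are invented): changing a single number in a query string retrieves a different record, with no password cracked and no security measure bypassed, yet overbroad laws have treated exactly this as criminal “unauthorized access.”

    # Hypothetical illustration only: "editing part of a URL."
    # No security measure is bypassed; one number in the query string changes.
    original = "https://example.com/records?id=1001"
    edited = original.replace("id=1001", "id=1002")
    print(edited)  # https://example.com/records?id=1002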

We also recommended ensuring that criminalization of actions requires clear malicious or dishonest intent to harm, steal, or infect with malware. And we recommended explicitly exempting good-faith security research and investigative journalism on issues of public interest from criminal liability.

What Has Already Been Approved?

Several provisions of the UN Cybercrime Convention have been approved ad referendum. These include both complete articles and specific paragraphs, indicating varying levels of consensus among the drafters.

Which Articles Have Been Agreed in Full?

The following articles have been agreed in full ad referendum, meaning the entire content of these articles has been approved:

    • Article 9: Interference with Electronic Data
    • Article 10: Interference with ICT Systems
    • Article 11: Misuse of Devices 
    • Article 28(4): Search and Seizure Assistance Mandate

We are frustrated to see, for example, that Article 11 (misuse of devices) has been accepted without any modification, and so continues to threaten the development and use of cybersecurity tools. Although it criminalizes creating or obtaining these tools only for the purpose of committing other crimes defined in Articles 7-10 (covering illegal access, illegal interception, interference with electronic data, and interference with ICT systems), those articles lack mandatory criminal intent requirements and a requirement to define “without right” as bypassing an effective security measure. Because those articles do not specifically exempt activities such as security testing, Article 11 may inadvertently criminalize security research and investigative journalism. It may punish even making or using tools for research purposes if the research, such as security testing, is considered to fall under one of the other crimes.

We are also disappointed that Article 28(4) has been approved ad referendum. This article could disproportionately empower authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. As we have written before, this provision can be abused to force security experts, software engineers, and tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees—under threat of criminal prosecution—into subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of a general duty for custodians of information to comply with data requests to the extent of their abilities.

Which Provisions Have Been Partially Approved?

The broad prohibitions against unauthorized access and interception have already been approved ad referendum, which means:

  • Article 7: Illegal Access (first paragraph agreed ad referendum)
  • Article 8: Illegal Interception (first paragraph agreed ad referendum)

The first paragraph of each of these articles includes language requiring countries to criminalize accessing systems or data, or intercepting communications, “without right.” This means that if someone intentionally gets into a computer or network without authorization, or performs one of the other actions called out in subsequent articles, it should be considered a criminal offense in that country. The additional optional requirements, however, are crucial for protecting the work of security researchers and journalists, and they are still on the negotiating table and worth fighting for.

What Has Not Been Agreed Upon Yet?

There is no agreement yet on paragraph 2 of Article 7 (illegal access) or of Article 8 (illegal interception), which gives countries the option to add specific requirements that can vary from article to article. Such safeguards could provide necessary clarifications to prevent the criminalization of legal activities and ensure that laws are not misapplied to stifle research, innovation, and reporting on public interest matters. We made clear throughout this negotiation process that these conditions are a crucially important part of all domestic legislation pursuant to the convention. We are disappointed to see that states have failed to act on any of our recommendations, including the letter we sent in February.

The final text dated May 23, 2024 of the convention is conspicuously silent on several crucial protections for security researchers:

  • There are no explicit exemptions for security researchers or investigative journalists who act in good faith.
  • The requirement for malicious intent remains optional rather than mandatory, leaving room for broad and potentially abusive interpretations.
  • The text does not specify that bypassing security measures should only be considered unauthorized if those measures are effective, nor does it make that safeguard mandatory.

How Has Similar Phrasing Caused Problems in the Past?

There is a history of overbroad interpretation under laws such as the United States’ Computer Fraud and Abuse Act, and this remains a significant concern with similarly vague language in other jurisdictions. This can also raise concerns well beyond researchers’ and journalists’ work, as when such legislation is invoked by one company to hinder a competitor’s ability to access online systems or create interoperable technologies. EFF’s paper, “Protecting Security Researchers' Rights in the Americas,” has documented numerous instances in which security researchers faced legal threats for their work:

  • MBTA v. Anderson (2008): The Massachusetts Bay Transit Authority (MBTA) used a  cybercrime law to sue three college students who were planning to give a presentation about vulnerabilities in Boston’s subway fare system.
  • Canadian security researcher (2018): A 19-year-old Canadian was accused of unauthorized use of a computer service for downloading public records from a government website.
  • LinkedIn’s cease and desist letter to hiQ Labs, Inc. (2017): LinkedIn invoked cybercrime law against hiQ Labs for “scraping”—accessing publicly available information on LinkedIn’s website using automated tools (see the sketch after this list). Questions and cases related to this topic have continued to arise, although an appeals court ultimately held that scraping public websites does not violate the CFAA.
  • Canadian security researcher (2014): A security researcher demonstrated a widely known vulnerability that could be used against Canadians filing their taxes. This was acknowledged by the tax authorities and resulted in a delayed tax filing deadline. Although the researcher claimed to have had only positive intentions, he was charged with a cybercrime.
  • Argentina’s prosecution of Joaquín Sorianello (2015): Software developer Joaquín Sorianello uncovered a vulnerability in election systems and faced criminal prosecution for demonstrating this vulnerability, even though the government concluded that he did not intend to harm the systems and did not cause any serious damage to them.
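
As a concrete reference for the hiQ example above, the following minimal Python sketch shows the kind of automated access at issue: fetching a publicly available page with an ordinary HTTP client. The URL is hypothetical, and no credentials or access controls are involved, which is why an appeals court ultimately held that this sort of public scraping does not violate the CFAA.

    # Minimal sketch of "scraping": automated retrieval of a public page.
    # The URL below is hypothetical; nothing here bypasses a login or any
    # other access control.
    import urllib.request

    def fetch_public_page(url: str) -> str:
        """Download one publicly accessible page, as a browser would."""
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8", errors="replace")

    html = fetch_public_page("https://example.com/public-profile/jane-doe")
    print(len(html), "bytes retrieved")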

These examples highlight the chilling effect that vague legal provisions can have on the cybersecurity community, deterring valuable research and leaving critical vulnerabilities unaddressed.

Conclusion

The latest draft of the UN Cybercrime Convention represents a tremendous failure to protect coders’ rights. By ignoring essential recommendations and keeping problematic language, the convention risks stifling innovation and undermining cybersecurity. Delegates must push for urgent revisions to safeguard coders’ rights and ensure that the convention fosters, rather than hinders, the development of a secure digital environment. We are running out of time; action is needed now.

Stay tuned for our next post, in which we will explore other critical areas affected by the proposed convention including its scope and human rights safeguards. 

Hand me the flashlight. I’ll be right back...

By: M. Jackalope
June 13, 2024 at 03:21

It’s time for the second installment of campfire tales from our friends, The Encryptids—the rarely-seen enigmas who’ve become folk legends. They’re helping us celebrate EFF’s summer membership drive for internet freedom!

Through EFF's 34th birthday on July 10, you can receive 2 rare gifts and be a member for just $20. As a bonus, new recurring monthly or annual donations get a free match! Join us today.

So...do you ever feel like tech companies still own the devices you’ve paid for? Like you don’t have alternatives to corporate choices? Au contraire! Today, Monsieur Jackalope tells us why interoperability plays a key role in giving you freedom in tech...

-Aaron Jue
EFF Membership Team

_______________________________________

Jackalope in a forest saying "Interoperability makes good things great!"

Call me Jacques. Some believe I am cuddly. Others deem me ferocious. Yet I am those things and more. How could anyone tell me what I may be? Beauty lives in creativity, innovation, and yes, even contradiction. When you are confined to what is, you lose sight of what could be. Zut! Here we find ourselves at the mercy of oppressive tech companies who perhaps believe you are better off without choices. But they are wrong.

Control, commerce, and lack of competition. These limit us and rob us of our potential. We are destined for so much more in tech! When I must make repairs on my scooter, do I call Vespa for their approval on my wrenches? Mais non! Then why should we prohibit software tools from interacting with one another? The connected world must not be a darker reflection of this one we already know.

EFF’s team—avec mon ami Cory Doctorow!—advocates powerfully for systems in which we do not need the permission of companies to fix, connect, or play with technology. Oui, c’est difficile: you find copyrighted software in nearly everything, and sparkling proprietary tech lures you toward crystal prisons. But EFF has helped make excellent progress with laws supporting your Right to Repair; they speak out against tech monopolies, lift up the free and open source software community, and advocate for creators across the web.

Join EFF

Interoperability makes good things great

You can make a difference in the fight to truly own your devices. Support EFF’s efforts as a member this year and reach toward the sublime web that interconnection and creativity can bring.

Cordialement,

Monsieur Jackalope

_______________________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

EFF to Ninth Circuit: Abandoning a Phone Should Not Mean Abandoning Its Contents

This post was written by EFF legal intern Danya Hajjaji.

Law enforcement should be required to obtain a warrant to search data contained in abandoned cell phones, EFF and others explained in a friend-of-the-court brief to the Ninth Circuit Court of Appeals.

The case, United States v. Hunt, involves law enforcement’s seizure and search of an iPhone the defendant left behind after being shot and taken to the hospital. The district court held that the iPhone’s physical abandonment meant that the defendant also abandoned the data stored on the phone. In support of the defendant’s appeal, we urged the Ninth Circuit to reverse the district court’s ruling and hold that the Fourth Amendment’s abandonment exception does not apply to cell phones: as it must in other circumstances, law enforcement should generally have to obtain a warrant before it searches someone’s cell phone.

Cell phones differ significantly from other physical property. They are pocket-sized troves of highly sensitive information with immense storage capacity. Today’s phone carries and collects vast and varied data that encapsulates a user’s daily life and innermost thoughts.

Courts—including the US Supreme Court—have recognized that cell phones contain the “sum of an individual’s private life.” And, because of this recognition, law enforcement must generally obtain a warrant before it can search someone’s phone.

While people routinely carry cell phones, they also often lose them. That should not mean losing the data contained on the phones.

While the Fourth Amendment’s ”abandonment doctrine” permits law enforcement to conduct a warrantless seizure or search of an abandoned item, EFF’s brief explains that this precedent does not mechanically apply to cell phones. As the Supreme Court has recognized multiple times, the rote application of case law from prior eras with less invasive and revealing technologies threatens our Fourth Amendment protections.

Our brief goes on to explain that a cell phone owner rarely (if ever) intentionally relinquishes their expectation of privacy and possessory interests in data on their cell phone, as they must for the abandonment doctrine to apply. The realities of the modern cell phone seldom suggest an intent to discard the wealth of data it contains. Cell phone data is also not usually confined to the phone itself; it is often stored in the “cloud” and accessible across multiple devices (such as laptops, tablets, and smartwatches).

We hope the Ninth Circuit recognizes that expanding the abandonment doctrine in the manner envisioned by the district court in Hunt would make today’s cell phone an accessory to the erosion of Fourth Amendment rights.

Encode Justice NC - the Movement for Safe, Equitable AI

The Electronic Frontier Alliance is proud to have such a diverse membership, and is especially proud to ally with Encode Justice. Encode Justice is a community that includes over 1,000 high school and college students across over 40 U.S. states and 30 countries. Organized into chapters, these young people constitute a global youth movement for safe, equitable AI. Their mission is mobilizing communities for AI aligned with human values.

At its core, Encode Justice is more than just a name. It’s a guiding philosophy: they believe we must encode justice and safety into the technologies we build. Young people are critical stakeholders in conversations about AI. Presently, as we find ourselves face-to-face with challenges like algorithmic bias, misinformation, democratic erosion, and labor displacement, we simultaneously stand on the brink of even larger-scale risks that could result from the loss of human control over increasingly powerful systems. Encode Justice believes human-centered AI must be built, designed, and governed by and for diverse stakeholders, and that AI should help guide us toward our aspirational future, not simply reflect the data of our past and present.

Currently, three local chapters of Encode Justice have joined the EFA: Encode Justice North Carolina, Oregon, and Georgia. I recently caught up with Siri, the leader of Encode Justice NC, about her chapter, its work, and how other people (including youth) can plug in and join the movement for safe, equitable AI:

Can you tell us a little about your chapter, its composition, and its projects?

Encode Justice North Carolina is an Encode Justice chapter led by Siri M that includes other high schoolers and college students in NC. Most of us are in the Research Triangle Park area, but we’d also welcome any NC-based student who is interested in our work! In the past, we have done projects including educational workshops, policy memos, and legislative campaigns (on the state & city council level) while lobbying officials and building coalitions with other state and local organizations.

Diving more into the work of your chapter, can you elaborate? And are there any local partnerships you’ve made with regard to your legislative advocacy efforts?

We’ve specifically done a lot of work around surveillance, with “AI in Policing & Surveillance” being the subject of our educational workshop with the national organization Paving Tomorrow. We’ve also lobbied the city council of Cary, NC to pass an ACLU model bill on police surveillance, after gaining support in the campaign from Emancipate NC, the EFA, and BSides RDU. Notably, we have lobbied our state legislature to pass a bill regarding social media addiction and data privacy for youth. Additionally, our chapter wrote and published a policy memo as part of the Encode Justice State AI legislative project to spread information and analysis on the local legislative landscape, stakeholders, and solutions regarding tech policy issues in our state. The memo was written for legislators, organizations, and the press to use.

We’ve also conducted a project to gather student testimonials on AI/school-based surveillance. In the near future, we are looking forward to working on bigger campaigns, including a national legislative facial recognition campaign, and a local campaign on the impacts of surveillance on immigrant communities. We are also more generally looking forward to expanding our reach while gaining new members in more regions of NC, and potentially leading more campaigns and projects while increasing their scope and widening our range of topics. 

How can other youth plug in to support and join the movement?

Anyone, including non-students, can follow us on Instagram at @encodejusticenc. If you are interested in becoming an Encode Justice North Carolina member, please fill out the form to do so! Lastly, if you are a student who would like to support us in a smaller way, you can fill out the student testimonials survey here.

The Next Generation of Cell-Site Simulators is Here. Here’s What We Know.

Dozens of policing agencies are currently using cell-site simulators (CSS) by Jacobs Technology and its Engineering Integration Group (EIG), according to newly-available documents on how that company provides CSS capabilities to local law enforcement. 

A proposal document from Jacobs Technology, provided to the Massachusetts State Police (MSP) and first spotted by the Boston Institute for Nonprofit Journalism (BINJ), outlines elements of the company’s CSS services, which include discreet integration of the CSS system into a Chevrolet Silverado and lifetime technical support. The proposal document is part of a winning bid Jacobs submitted to MSP earlier this year for a nearly $1 million contract to provide CSS services, making MSP the latest customer of one of the largest providers of CSS equipment.

An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police. Source: 2024 Jacobs Proposal Response

The proposal document from Jacobs provides some of the most comprehensive information about modern CSS that the public has had access to in years. It confirms that law enforcement has access to CSS capable of operating on 5G as well as older cellular standards. It also gives us our first look at modern CSS hardware. The Jacobs system runs on at least nine software-defined radios that simulate cellular network protocols on multiple frequencies and can also gather Wi-Fi intelligence. As these documents describe, these CSS are meant to be concealed within a common vehicle. Antennas are hidden under a false roof so nothing can be seen outside the vehicle, which is a shift from the more visible antennas and cargo-van-sized deployments we’ve seen before. The system also comes with TRACHEA2+ and JUGULAR2+ for direction finding and mobile direction finding.

The Jacobs 5G CSS base station system. Source: 2024 Jacobs Proposal Response

CSS, also known as IMSI catchers, are among law enforcement’s most closely guarded secret surveillance tools. They act like real cell phone towers, “tricking” mobile devices into connecting to them so they can intercept the information phones send and receive, such as the location of the user and metadata for phone calls, text messages, and other app traffic. CSS are highly invasive and used discreetly. In the past, law enforcement used a technique called “parallel construction”—collecting evidence in a different way to reach an existing conclusion in order to avoid disclosing how it was originally collected—to circumvent public disclosure of location findings made through CSS. In Massachusetts, agencies are expected to get a warrant before conducting any cell-based location tracking. The City of Boston is also known to own a CSS.

This technology is like a dragging fishing net rather than a single focused hook in the water. Every phone in the vicinity connects to the device; even people completely unrelated to an investigation get wrapped up in the surveillance. CSS, like other surveillance technologies, subject civilians to widespread data collection, even those who have not been involved with a crime, and have been used against protestors and other protected groups, undermining their civil liberties. Their adoption should require public disclosure, but this rarely occurs. These new records provide insight into the continued adoption of this technology. It remains unclear whether MSP has policies to govern its use. CSS may also interfere with the ability to call emergency services, especially for people who rely on accessibility technologies, such as those who cannot hear.
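
To illustrate why CSS sweep in bystanders, here is a simplified conceptual sketch in Python (our own illustration, not code from any real system): handsets camp on whichever base station advertises the strongest signal, so a simulator that out-broadcasts nearby towers is joined by every phone in range, not just a target's.

    # Conceptual sketch (hypothetical and simplified): the CSS dragnet effect.
    from dataclasses import dataclass

    @dataclass
    class BaseStation:
        name: str
        signal_dbm: float  # higher (less negative) means stronger
        is_simulator: bool = False

    def camp_on(stations):
        """A handset selects the strongest advertised base station."""
        return max(stations, key=lambda s: s.signal_dbm)

    towers = [
        BaseStation("carrier tower A", -95.0),
        BaseStation("carrier tower B", -90.0),
        BaseStation("CSS in parked truck", -60.0, is_simulator=True),
    ]

    # Every nearby phone runs the same selection logic, so every phone --
    # bystander or suspect -- attaches to the simulator and reveals
    # identifiers (such as the IMSI) during the ordinary registration
    # handshake.
    for phone in ["phone-1", "phone-2", "phone-3"]:
        chosen = camp_on(towers)
        tag = " (simulator!)" if chosen.is_simulator else ""
        print(f"{phone} -> {chosen.name}{tag}")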

Central to the MSP contract is the modification of a Chevrolet Silverado with the CSS system. This includes both the surreptitious installation of the CSS hardware into the truck and the integration of its software user interface into the vehicle’s navigation system. According to Jacobs, this is the kind of installation with which it has extensive experience.

Jacobs built its CSS business on military and intelligence community relationships formed in the years after September 11, 2001, and those relationships now inform the development of a tool used in domestic communities, not foreign warzones. Harris Corporation, later L3Harris Technologies, Inc., was the largest provider of CSS technology to domestic law enforcement but stopped selling to non-federal agencies in 2020. Once Harris stopped selling to local law enforcement, the market was open to several competitors, one of the largest of which was KeyW Corporation. Following Jacobs’s 2019 acquisition of KeyW and its Engineering Integration Group (EIG), Jacobs is now a leading provider of CSS to police, and it claims to have more than 300 current CSS deployments globally. EIG’s CSS engineers have experience with the tool dating to late 2001, and they now provide the full spectrum of CSS-related services to clients, including integration into vehicles, training, and maintenance, according to the document. Jacobs CSS equipment is operational in 35 state and local police departments, according to the documents.

EFF has been able to identify 13 agencies using the Jacobs equipment, and, according to EFF’s Atlas of Surveillance, more than 70 police departments have been known to use CSS. Our team is currently investigating possible acquisitions in California, Massachusetts, Michigan, and Virginia. 

An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system. Source: 2024 Jacobs Proposal Response

The proposal also includes details on other agencies’ use of the tool, including the Fontana, CA Police Department, which it says deployed its CSS more than 300 times between 2022 and 2023, and the Prince George’s County Sheriff (MD), which has also had a Chevrolet Silverado outfitted with CSS.

Jacobs isn’t alone in the domestic CSS market. Cognyte Software and Tactical Support Equipment, Inc. also bid on the MSP contract, and last month the City of Albuquerque closed a call for a cell-site simulator contract that it awarded to Cognyte Software Ltd.

Shhh. Did you hear that?

It’s Day One of EFF’s summer membership drive for internet freedom! Gather round the virtual campfire because I’ve got special treats and a story for you:

  1. New member t-shirts and limited-edition gear drop TODAY.

  2. Through EFF's 34th birthday on July 10, you can get 2 rare gifts and become an EFF member for just $20! AND new automatic monthly or annual donors get an instant match.

  3. I’m proud to share the first post in a series from our friends, The Encryptids—the rarely-seen enigmas who inspire campfire lore. But this time, they’re spilling secrets about how they survive this ever-digital world. We begin by checking in with the legendary Bigfoot de la Sasquatch...

-Aaron
EFF Membership Team

____________________________

Bigfoot with sunglasses in a forest saying "Privacy is a human right."

People say I'm the most famous of The Encryptids, but sometimes I don't want the spotlight. They all want a piece of me: exes, ad trackers, scammers, even the government. A picture may be worth a thousand words, but my digital profile is worth cash (to skeezy data brokers). I can’t hit a city block without being captured by doorbell cameras, CCTV, license plate readers, and a maze of street-level surveillance. It can make you want to give up on privacy altogether. Honey, no. Why should you have to hole up in some dank, busted forest for freedom and respect? You don’t.

Privacy isn't about hiding. It's about revealing what you want to who you want on your terms. It's your basic right to dignity.

A wise EFF technologist once told me, “Nothing makes you a ghost online.” So what we need is control, sweetie! You're not on your own! EFF has worked for decades to set legal precedents for us, to push for good policy, fight crap policy, and create tools so you can be more private and secure on the web RIGHT NOW. They even have whole-ass guides that help people around the world protect themselves online. For free!

I know a few things about strangers up in your business, leaked photos, and wanting to live in peace. Your rights and freedoms are too important to leave up to tech companies and politicians. This world is a better place for having people like the lawyers, activists, and techs at EFF.

Join EFF

Privacy is a "human" right

Privacy is a team sport and the team needs you. Sign up with EFF today and not only can you get fun stuff (featuring ya boy Footy), you’ll make the internet better for everyone.

XOXO,

Bigfoot DLS

____________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

EFF Covers Secrets in Your Data on NOVA

It’s the weekend. You decide you want to do something fun with your family—maybe go to a local festival or park. So, you start searching on your favorite social media app to see what other people are doing. Soon after, you get ads on other platforms about the activities you were just looking at. What the heck?

That’s the reality we’re in today. As EFF’s Associate Director of Legislative Activism Hayley Tsukayama puts it, “That puts people in a really difficult position, when we’re supposed to manage our own privacy, but we’re also supposed to use all these things that are products that will make our lives better.”

Watch EFF’s Cory Doctorow, Eva Galperin, Hayley Tsukayama, and others in the digital rights community explain how your data gets scooped up by data brokers—and learn common practices to protect your privacy online—in Secrets in Your Data on NOVA. You can watch the premiere or read the transcript below:

Watch Secrets in Your Data on PBS.org

EFF continues pushing for a comprehensive data privacy law that would rein in data brokers’ ability to collect our information and share it with the highest bidders, including law enforcement. Additionally, you can use these resources to help keep yourself safe online.

The UN Cybercrime Draft Convention Remains Too Flawed to Adopt

The proposed UN Cybercrime Convention, scheduled for a critical concluding session from July 29 to August 9, poses a significant threat to global human rights unless major changes are made. Despite two and a half years of intense discussions and seven negotiation sessions, states remain deeply divided on fundamental aspects, leading to a deeply flawed draft text and a problematic chair’s proposal from February 2024. They can’t even agree on what to call the Convention, much less its scope—should it address only core cybercrime, or any crime committed using technology?

The February 2024 language continues to risk criminalizing protected speech, granting broad surveillance powers without robust safeguards, and raising serious cybersecurity concerns. Despite continuous advocacy from civil society and industry, these key issues remain unaddressed. A new version of the Convention is expected soon, but without addressing these critical flaws, the risks to human rights remain.

Joint NGO Letter and EFF's Redlines

In a joint letter with over 100 NGOs, we state that the Cybercrime Convention must not advance without addressing critical flaws. The letter outlines clear requirements: the Convention must focus solely on cyber-dependent crimes, incorporate comprehensive human rights safeguards, and ensure robust protections for security researchers, whistleblowers, activists, and journalists. Absent these minimum requirements, we call on state delegations to reject the draft Convention and refuse to advance it to the UN General Assembly for adoption.

EFF echoes these requirements, among others:

  • First, the Convention must be narrowly focused on cyber-dependent crimes, excluding from its scope overly broad content-related crimes that contradict human rights law.
  • Second, it must include robust protections for security researchers, whistleblowers, activists, and journalists to ensure they are not unjustly criminalized for performing their essential work.
  • Third, it must incorporate comprehensive human rights safeguards, including the principles of legality, non-discrimination, legitimate purpose, necessity, proportionality, transparency, effective remedy, and prior judicial authorization applicable throughout the entire Convention.
  • Fourth, the scope of procedural measures and international cooperation must be limited to the defined cyber-dependent crimes, with explicit minimum robust safeguards against abuses of surveillance and data sharing, and adequate protection of personal data. 
  • Fifth, direct sharing of personal data must be limited to specific criminal investigations and be subject to robust minimum safeguards mandated in the text itself to prevent misuse, including compliance with the principles of legality, necessity, proportionality, transparency, and user notification, and the need for prior judicial authorization.
  • Sixth, proactive sharing of personal data must be strictly limited and conditioned on compliance with minimum robust standards and international human rights law.

As is, the Convention will be a tool for states with repressive domestic laws to impose arbitrary and disproportionate restrictions on rights and freedoms. As the negotiations resume, it is crucial to address these issues and ensure the Convention aligns with international human rights standards to prevent disaster.

Many other NGOs and industry representatives have expressed similar concerns about the proposed UN Cybercrime Convention. You can read their detailed opinions here: Human Rights Watch and Article 19, Privacy International, Global Partners Digital, Derechos Digitales, Microsoft, Cybersecurity Tech Accord, and a joint civil society and industry statement.

Origins and Development of the Convention 

The proposed UN Cybercrime Convention's journey began in October 2017, when Russia proposed a draft aiming to tackle the “use of information and communication technology for criminal purposes.” This effort gained momentum in November 2019, when a UN Resolution, backed by a bloc of nations that included China, Iran, and Syria, was passed despite strong opposition from the US, the EU, and others.

By December 2019, the UN General Assembly adopted a Resolution to form an Ad Hoc Committee (AHC) to draft the Convention. The process faced delays due to COVID-19, with the first organizational meeting postponed to 2021. Despite initial resistance, the AHC's inaugural session in May 2021 saw participation from over 160 countries, outlining a plan for multiple negotiating sessions. The AHC's mandate specifies that it must “conclude its work in order to provide a draft Convention to the General Assembly at its seventy-eighth session in September 2024.”

EFF has been involved in the UN Cybercrime Convention process from the start, though we've always been skeptical about its necessity due to the significant risks it poses to human rights. Together with a coalition of 130 NGOs, we have consistently raised alarms about the potential misuse of cybercrime laws to target dissent, activists, advocates, security researchers, and journalists. Our concerns, shared with allies, date back to well before the first substantive session began in 2022. In 2021, the UN General Assembly expressed grave concerns that cybercrime legislation was being misused to target human rights defenders, hinder their work, and endanger their safety in a manner contrary to international law.

The UN Special Rapporteur on the rights to freedom of peaceful assembly and of association has noted that laws and policies aimed at combating cybercrime have increasingly been used as a means to punish and monitor activists and protesters globally. The Special Rapporteur highlighted that although technology can indeed be used “to promote terrorism, incite violence, and manipulate elections, these concerns are frequently exploited to justify crackdowns on digital civil society.”

This sentiment was echoed by the Office of the High Commissioner for Human Rights in 2022, which highlighted that national cybercrime laws are often used to "restrict freedom of expression, target dissenting voices, justify internet shutdowns, interfere with privacy and anonymity of communications, and limit the rights to freedom of association and peaceful assembly."

Analyzing the Convention’s Expansive Reach and Human Rights Concerns

Article 3: Scope of the Convention

Article 3 outlines the scope of the UN Cybercrime Convention, dividing it into two crucial parts. Article 3(a) limits the scope of application to crimes “established in accordance with the Convention,” covering their prevention, investigation, and prosecution. In contrast, Article 3(b) broadens the reach to cover domestic procedural measures (Article 23) and international cooperation (Article 35), including evidence-gathering for any offense deemed serious under national law, expanding the Convention's application to a wide array of serious offenses regardless of their connection to cybercrime. Understanding this difference is key to grasping the potential impact and reach of the Convention.

EFF has consistently argued that the Convention should be limited to core or cyber-dependent crimes—offenses in which computer systems are the direct objects and instruments, crimes which could not exist without information and communications technology (ICT) systems. By focusing exclusively on these core cybercrimes, the Convention would allow states to concentrate their resources, expertise, and capacity-building on these specific offenses. This approach would also prevent cross-border cooperation on a range of other offenses that are often antithetical to human rights. 

This limitation should apply to the criminalization chapter and the chapter on international cooperation (including spying assistance and data sharing powers), and even to the chapter on  domestic spying powers. Core cybercrimes include unauthorized access to ICT systems, illegal interception, damaging, deleting, deteriorating, altering, or suppressing electronic data, hindering the functioning of ICT systems, and misuse of devices.

Regrettably, the Convention is broader in scope than just core cybercrimes. It addresses cyber-enabled crimes, which are traditional crimes that may in certain instances be facilitated or amplified by the use of technology. These crimes leverage the reach, speed, and anonymity provided by the internet and other digital platforms to enhance their impact, such as ICT-related theft or fraud (Article 12), and solicitation or grooming for sexual offenses against children (Article 14).

It also includes overly broad and vague content-related offenses—crimes that involve the creation, distribution, or possession of material considered illegal or harmful, such as online child sexual abuse material (Article 13) and the non-consensual dissemination of intimate images (Article 15)—which can lead to the over-criminalization of protected speech.

On the spying front, the proposed convention also allows for extensive data sharing and cross-border assistance to gather evidence for any crime a state deems serious in its national law. The Convention also deals with extradition, and because it lacks clear limitations and minimum human rights safeguards explicitly embedded in the text itself, it risks becoming a tool for human rights abuses and transnational repression, undermining cybersecurity and the very principles it aims to protect.

Human Rights Safeguards

The proposed convention has two articles on human rights that could potentially limit its broad scope and intrusive surveillance powers: a general provision under Article 5, which applies to the entire draft convention, and Article 24, which describes the conditions and safeguards for new domestic surveillance powers. However, both articles are insufficient to provide meaningful protections in practice.

Article 5: General Human Rights Provisions 

Article 5 falls short in two respects. First, it should mandate compliance with human rights obligations, not merely consistency with them. This less stringent wording allows for broader interpretation by states, and potentially looser application, which could lead to inconsistent protection across jurisdictions, as states with weaker human rights records may interpret "consistent with" in a way that minimally satisfies their obligations without fully protecting individuals' rights.

Second, Article 5 fails to explicitly incorporate core tenets of human rights law, including the principles of legality, necessity, proportionality, and non-discrimination, and generally fails to impose explicit limitations. In practice, this means that many elements of the convention are likely to be implemented in ways that fall short of international human rights standards. Notably, some prospective signatories to this convention have refused to sign and ratify core human rights instruments such as the ICCPR, and in negotiations a number of states have explicitly rejected attempts to incorporate equality rights into Article 5, including the obligation to mainstream a gender perspective and to take into consideration, when implementing the convention, the circumstances of people who face marginalization in society. Uruguay, for example, has proposed integrating language on gender, vulnerable groups, and rule-of-law safeguards.

Article 24: Conditions and Safeguards for Domestic Surveillance Powers

Article 24 of the proposed UN Cybercrime Convention outlines how states should protect human rights when using domestic surveillance powers. While Article 24 helpfully incorporates the principle of proportionality—a central human rights principle—it fails to explicitly include the principles of legality, necessity, and non-discrimination. The principle of legality requires laws to be clear, publicized, and precise, ensuring individuals understand what is criminalized. The principle of necessity ensures any interference with human rights is demonstrably necessary to achieve a legitimate aim. The principle of non-discrimination requires that laws and policies be applied equally and fairly to all individuals, without any form of discrimination based on race, color, sex, language, religion, political or other opinion, national or social origin, property, birth, or other status. Without these principles, the safeguards are incomplete and inadequate, increasing the risk of misuse and abuse of surveillance powers.

One of the critical components of effective human rights safeguards is the inclusion of prior judicial authorization, transparency, user notification, and the right to an effective remedy. The Chair’s Proposal specifies in Article 24(2) that conditions and safeguards should "include, inter alia, judicial or other independent review, the right to an effective remedy, grounds justifying application, and limitation of the scope and duration of such power or procedure." However, making these safeguards contingent on domestic law can weaken their effectiveness, as national laws vary significantly and may not provide adequate protections. Moreover, while both versions of Article 24 incorporate the principle of proportionality, they fail to explicitly include the principles of legality and necessity described above. By granting states broad discretion to decide which safeguard to apply in relation to which surveillance power, the convention fails to ensure the text will be implemented in a manner that accords with human rights.

To address these issues, the Special Rapporteur has already called on states to revise and amend (...)  surveillance (...) and bring them into compliance with international human rights norms and standards governing the right to privacy, the right to free expression, peaceful assembly, and freedom of association. This issue remains unresolved, and the current convention risks perpetuating these existing concerns.

Domestic Spying Powers and Domestic Safeguards

The Convention grants extensive domestic surveillance powers to gather evidence for any crime, accompanied by minimal and insufficient safeguards, many of which do not even apply to its chapter on cross-border surveillance (Chapter V). Key measures include expedited preservation of electronic data (Article 25), production orders for specific data (Article 27), and real-time collection of traffic and content data (Articles 29 and 30). These provisions enable rapid and comprehensive data access, essential for investigating cybercrimes. One particularly troubling aspect is Article 28(4), which allows authorities to compel individuals with knowledge of ICT systems to provide necessary information for accessing data. EFF has consistently voiced concerns that this provision could lead to forced assistance without adequate protection for the rights of those compelled. This broad and potentially coercive power risks significant abuse, especially in jurisdictions lacking strong human rights safeguards.

The combination of intrusive domestic surveillance powers paired with insufficient safeguards heightens the risk of misuse, potentially leading to arbitrary and disproportionate restrictions on privacy and other human rights. To illustrate the potential risks of granting states broad discretion in applying safeguards, consider the following examples:

  1. Lack of legal protection of subscriber data: This threatens the anonymity of the LGBTQ+ community, making its members vulnerable to identification and subsequent persecution. Without strong safeguards and a narrow scope, the mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to users' identities being disclosed, putting them at significant risk. Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be monitored, collected, intercepted, and turned against them.
  2. Metadata Tracking: A country could classify metadata, such as location data, with less stringent protections compared to content data, leading to extensive tracking of individuals' movements without adequate oversight. 
  3. Weak Judicial Oversight: In a country with a weak judicial system, surveillance activities might not require judicial oversight or prior judicial authorization, allowing authorities to conduct intrusive surveillance without proper scrutiny. 
  4. Discriminatory Surveillance Practices: Broad discretion could enable discriminatory surveillance practices, disproportionately targeting certain ethnic or religious groups under the pretext of “protecting the children.”
  5. International Data Sharing: Without clear limitations, a country could share surveillance data internationally, risking the persecution of political dissidents or human rights activists in countries with poor human rights records.
  6. Lack of Transparency: A lack of transparency requirements for surveillance activities could prevent individuals from knowing whether they are being surveilled or from challenging unlawful surveillance. 
  7. Weak Protections for Digital Communications: Lastly, weak protections for digital communications such as emails and instant messages could allow authorities to intercept and read private communications without robust legal safeguards or oversight. 

For safeguards to be meaningful, the Convention should mandate prior approval by a judge for surveillance activities. As specified in the Necessary and Proportionate Principles, meaningful safeguards should also set strict time limits and establish transparency obligations, such as notifying individuals when their personal data has been accessed. While the Chair’s Proposal includes the right to an effective remedy, individuals cannot effectively exercise this right if they are unaware that their data was accessed, especially in cases where the investigation does not lead to legal proceedings. The authorities should also be required to explain the specific facts that justify surveilling particular individuals and publicly report the frequency of using these powers.

In conclusion, while the Chair’s  Proposal makes some improvements by explicitly including the right to an effective remedy and continuing to recognize the principle of proportionality, its reliance on domestic law for oversight significantly weakens the protection of human rights. The absence of the principles of legality and necessity, combined with the broad discretion given to States, heightens the risk of misuse and abuse of surveillance powers. To truly safeguard human rights, the Convention must mandate strict compliance with international human rights standards and ensure comprehensive and consistent application of safeguards across all states.

The Dangers of Cross-Border Surveillance and Data Sharing

Scope Creep in International Cooperation

One might assume a "cybercrime" convention would focus exclusively on cybercrimes. However, the principles of international cooperation in this convention exemplify significant and dangerous scope creep. And without mandated safeguards in the convention itself for this chapter, this opens the door wide for abuse and transnational repression.

The scope of the international cooperation chapter is still notably wide, and is one primary reason we've repeatedly said this convention is truly an all-purpose global surveillance instrument:

  • Article 35(1)(b) of the chair's proposal requires states to cooperate in the collection, obtaining, preservation, and sharing of electronic evidence for criminal investigations or proceedings of criminal offenses established in accordance with the Convention. Essentially, this means that states are obliged to assist each other in managing electronic evidence related to Articles 6-16, regardless of their severity;
  • Article 35(1)(c) of the chair's proposal significantly broadens the scope of international cooperation by including the collection, obtaining, preservation, and sharing of electronic evidence for any activity deemed serious by national law. The defining criterion for "serious" is a crime that carries a prison term of at least four years, as stated in Article 2(1)(h) of the convention. Importantly, the crime itself is defined by the national law of the state requesting cooperation. The only requirement set by the convention is the severity of the penalty (a prison term of at least four years). Therefore, as long as the national law includes a crime punishable by at least four years of imprisonment, it qualifies for international cooperation under this provision, whether the alleged offense is a cybercrime or not. This also includes serious offenses established in accordance with “other applicable United Nations conventions and protocols in force at the time of adoption” of the Convention. (The sketch below models how purely formal this test is.)
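To make the point concrete, here is a minimal sketch, in Python with hypothetical offense data, of the only test Article 35(1)(c) imposes: the requested state checks the requesting state's own penalty threshold, never the nature of the conduct.

```python
# Illustrative model of Article 35(1)(c)'s scope. The offenses and penalties
# below are hypothetical examples, echoing the national laws discussed later
# in this post.

SERIOUS_CRIME_THRESHOLD_YEARS = 4  # Article 2(1)(h): punishable by >= 4 years

def qualifies_for_cooperation(max_penalty_years: int) -> bool:
    """A request qualifies if the requesting state's own law punishes the
    offense with at least four years' imprisonment. Nothing in this test
    asks whether the offense is a cybercrime, or whether it criminalizes
    protected speech."""
    return max_penalty_years >= SERIOUS_CRIME_THRESHOLD_YEARS

requests = {
    "ransomware attack": 10,
    "displaying a rainbow flag, repeat offense": 4,
    "insulting the monarchy": 15,
}

for offense, penalty_years in requests.items():
    print(offense, "->", qualifies_for_cooperation(penalty_years))
```

Because the test is purely formal, a speech offense clears it exactly as easily as an actual cybercrime, which is the scope creep the following paragraphs describe.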

 This broad scope could lead to abuses, particularly in countries with weaker human rights protections, where national laws might include offenses that do not align with international human rights standards.

Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures that are in stark contradiction with international human rights law and UN values. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. In certain countries, many of these criminal laws might be based on subjective moral judgments that suppress what is considered protected speech under international human rights standards. 

As such, these provisions could result in heightened cross-border monitoring and potential repercussions for individuals, leading to torture or even the death penalty in countries like Iran. For example, activists urged the UN to relocate COP27 from Egypt due to concerns over Egypt's record of torture of LGBTQ+ people, violence against women, civil rights suppression, and limitations on the participation of diverse voices, including protesters and indigenous rights groups.

The Special Rapporteur on the rights to freedom of peaceful assembly and association has observed that states increasingly use technology to silence, surveil, and harass dissidents, political opposition, human rights defenders, activists, and protesters, as well as manipulate public opinion. This includes the use of digital surveillance (...) to suppress civil society activities.

Effectively, whenever countries deem any criminal act to be subject to a prison term of at least four years in their domestic law, they can use the Convention to ask other governments to assist in spying to collect evidence, even when the offenses involve speech or otherwise criminalize activities protected by human rights law. All of this illustrates how repressive regimes can exploit the broad scope of the Convention’s international cooperation regime—including cross-border spying assistance and extradition—to gather evidence and target marginalized communities, posing significant human rights problems.

Even worse, the situation is exacerbated by the fact that cross-border data sharing and surveillance assistance between states are not subject to the safeguards in Article 24. Instead, the safeguards will be those of the requesting country, whatever that standard may be, further amplifying the risk of human rights abuses and transnational repression.

Transnational repression refers to actions by governments that reach beyond their borders to silence dissent among their nationals abroad through tactics like surveillance, harassment, and intimidation. For decades, Human Rights Watch has documented governments reaching outside their borders to silence or deter dissent by committing human rights abuses against their own nationals or former nationals. Governments have targeted human rights defenders, journalists, civil society activists, and political opponents, among others, deemed to be a security threat. Many are asylum seekers or recognized refugees in their place of exile. These governmental actions beyond borders leave individuals unable to find genuine safety for themselves and their families. See table of cases at the end.

According to research by Freedom House, the top five perpetrators of transnational repression are China, Turkey, Tajikistan, Egypt, and Russia, followed by Turkmenistan, Uzbekistan, Iran, Belarus, and Rwanda; these 10 nations are collectively responsible for 80 percent of documented cases. China alone accounts for 30 percent of these cases.

Transnational repression is a growing concern that poses significant challenges to international human rights norms and protections. Several other organizations have likewise warned that existing international law enforcement cooperation mechanisms are being abused or twisted to enable political repression, even beyond forceful data localization mandates that seek to bypass international cooperation rules.

INTERPOL, for instance, is an intergovernmental organization of more than 190 member countries that facilitates worldwide police cooperation. But Human Rights Watch has documented numerous allegations that China, Bahrain, Turkey, and other countries have abused INTERPOL’s Red Notice system—a request to law enforcement worldwide to “locate and provisionally arrest a person pending extradition, surrender, or similar legal action”—to locate peaceful critics of government policies, ostensibly for minor offenses but really for political gain.

While states continue to negotiate over whether some of the conventions’ specific cross-border surveillance powers will be limited in application to a subset of crimes, the overall impact of the convention is concerning. By obligating states to process cooperation requests in relation to any offense deemed serious as defined by national law, the convention’s broad scope threatens to overwhelm the ability of already overburdened legal assistance bodies to ensure they are processing requests in a way that is consistent with their own human rights obligations. It would also operate as an internationally authorized vehicle of cooperation between states where the rule of law has broken down and which have a track record of abusing international cooperation instruments for repression.

While some democratic countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. First, grounds for refusal are optional, not obligatory. The draft convention allows countries to refuse a request if the activity in question is not a crime in its domestic regime (the principle of "dual criminality"). However, given the current strain on the mutual legal assistance treaty (MLAT) system, there's an increasing likelihood that requests, even from countries with contentious laws, could slip through the cracks. This opens the door for nations to inadvertently assist in operations that might contradict global human rights norms. Second, where countries do share the same subjective values and problematically criminalize the same conduct, this draft convention seemingly provides a justification for their cooperation. And even governments that claim to uphold free expression and privacy domestically frequently abandon these principles in international cooperation, especially under the pretext of counterterrorism.

It's now less likely that governments will refuse mutual legal assistance requests on human rights grounds

Third, as we previously discussed with Deborah Brown, with the rise of cloud computing and companies storing data in various countries, including those with poor human rights records like Saudi Arabia, it's now less likely that governments will refuse mutual legal assistance requests on human rights grounds. In the past, most data was stored in only a handful of countries, making it easier to deny disproportionate requests. Today, with data scattered across multiple jurisdictions, enforcing human rights protections becomes more complicated and less consistent.

Article 40: Mutual Legal Assistance (MLA)

Article 40 outlines the principles and procedures for mutual legal assistance (MLA) between states. It mandates that states provide the broadest measure of MLA in investigations, prosecutions, and judicial proceedings related to offenses established "in accordance with the Convention," specifically those outlined in Articles 6 to 16, which cover various cybercrimes. The article sets the framework for cooperation in collecting electronic evidence and ensures that MLA is provided to the fullest extent possible under relevant laws and treaties. A bracket in Article 40(1) ["as well as of serious crimes"] marks text that received preliminary approval during informal discussions but is still under negotiation and has not been finalized. Its inclusion would broaden the scope of mutual legal assistance to serious crimes beyond those specifically defined in the Convention, pending consensus among the negotiating states.

Additionally, Article 40(8) of the Convention allows countries to refuse requests for help if: the request doesn’t follow the rules of the Convention; helping would harm the country’s sovereignty, security, or other important interests; the requested action would be prohibited under the requested country’s own laws if applied to a similar domestic crime; or granting the request would conflict with the requested country’s legal system. However, these grounds of refusal are not enough. The chair has proposed the addition of Article 40.20 (bis), allowing states to refuse mutual legal assistance if the request is believed to be made for political purposes or to prosecute someone based on their political opinions, sex, race, language, religion, nationality, or ethnic origin. Yet the high evidentiary threshold may limit the practical effectiveness of this safeguard, making it difficult for states to justify refusals and potentially allowing such requests to proceed.

Article 40.4: Proactive Information Sharing and Its Risks

Article 40.4 also allows authorities to share information about criminal matters with foreign counterparts proactively, without a formal request. While intended to facilitate international cooperation, this provision poses significant risks to privacy and data protection. Without stringent safeguards, sensitive personal data could be shared too freely, potentially leading to misuse, especially if the receiving country lacks strong data protection laws. Article 40.4 must be amended to ensure that personal data is only shared when strictly necessary for specific criminal investigations, prosecutions, and judicial proceedings, and with robust data protection rules in place.

Article 47: Extensive Data Sharing for Investigative Purposes

Article 47 also presents significant and troubling legal challenges due to its expansive scope and the absence of essential safeguards. This new version continues to authorize extensive cooperation among States Parties, including the sharing of personal and sensitive data for analytical or investigative purposes, though it has now been limited to a set of crimes. However, it fails to incorporate critical protections found in Article 24, such as the principles of legality, necessity, proportionality, and transparency, prior judicial authorization, and robust data protection measures. This omission is alarming, as it could permit the unregulated exchange of sensitive data, including biometric, traffic, and location data. The provision's lack of specificity and its disconnection from particular criminal investigations or proceedings exacerbate these concerns, potentially enabling large-scale data sharing and the targeting of vulnerable populations, including journalists, activists, and minority groups.

Moreover, the absence of oversight by central authorities and the lack of clear limitations or exclusions for sharing sensitive personal data further amplify the risk of human rights violations. It is imperative that this article be fundamentally revised to include robust human rights protections, ensuring that international cooperation does not come at the expense of civil liberties and data protection.

In conclusion, the breadth of the cross-border regime and the absence of adequate human rights safeguards will facilitate human rights abuses by allowing states to request assistance in national investigations. Disagreements—from the broad scope to the absence of robust minimum human rights safeguards—are deep and substantive, and remain on the negotiating table, albeit now in closed-door informal meetings. Yet despite these fundamental issues, negotiators continue to present compromises that sweep these problems under the rug as a manufactured potential consensus.

The breadth of the cross-border regime and the absence of adequate human rights safeguards will facilitate human rights abuses

The next version of the Convention’s text, expected in early June, must address these issues, which were left unresolved in the chair’s compromise text published in February 2024. Critical unanswered questions remain, and the text continues to reflect the deep divides among states. Minimal progress has been made in limiting the convention's scope of cross-border spying assistance and data sharing or in strengthening human rights safeguards, and even less in ensuring these safeguards apply to the international cooperation chapter. Prioritizing consensus over human rights protections risks disproportionate surveillance abuses and significant erosion of privacy and freedom of expression. EFF and a coalition of NGOs have consistently warned about the dangers of such compromises, cautioning that "there is a real risk that, in an attempt to entice all States to sign a proposed UN cybercrime convention, bad human rights practices will be accommodated, resulting in a race to the bottom.”

Missed Opportunities: The Exclusion of Key Safeguards 

To mitigate the harm of the Convention’s broad scope and limited safeguards, during the January session Canada proposed an amendment to Article 3 to narrow the Convention's application so that it does not cover acts of repression:

“Nothing in this Convention shall be interpreted as permitting or facilitating repression of expression, conscience, opinion, belief, peaceful assembly or association; or permitting or facilitating discrimination or persecution based on individual characteristics.”

This proposal would, in principle, render some of the Convention’s more problematic features, such as its cross-border cooperation regime, inapplicable to acts of repression or discrimination.

The current chair's proposal would permit (but not require) states to refuse cross-border MLA requests that are politically motivated or discriminatory, provided there are substantial grounds for believing this to be the case. However, the requirement for substantial grounds sets a high evidentiary threshold that may limit the practical effectiveness of this safeguard, making it challenging for states to justify refusals and potentially allowing politically motivated or discriminatory requests to proceed.

Similarly, Article 59 (3) of the chair's proposal is intended to safeguard human rights by ensuring that the Convention cannot be used to justify unlawful restrictions on human rights and fundamental freedoms. However, its general language and lack of specific enforcement mechanisms render it weak. The provision relies on the interpretation and goodwill of states, which can vary significantly, particularly in jurisdictions with poor human rights records. 

Neither of these proposals, however, would solve all of the Convention’s ills. Rights-respecting states will be better equipped to refuse requests that conflict with their human rights obligations, but the Convention's broad scope will flood national MLAT units with requests from governments around the world in relation to all serious crimes. 

This will make it far more difficult for these already over-burdened MLAT units to identify human rights abuses when processing foreign requests. Canada’s proposal would also further permit impacted people to challenge government action directly on the basis that it falls outside the scope of the Convention, including action taken on the basis of its substantive criminal provisions and its domestic surveillance powers. However, the Convention includes a number of secrecy provisions and fails to include an individual notice obligation. As a result, individuals rarely will be aware that they are the object of a request and will have limited opportunities to challenge these on the basis that they fall outside the scope of the Convention.

Nonetheless, these proposals would have provided tools to mitigate some of the convention’s more problematic aspects, yet neither is included in the current text.

Broadening Criminalization: Risks of Overreach and Repression in the Convention

Since the start of the process, a number of states have pushed for including a much-expanded list of criminal offenses in the convention, simply on the basis that these offenses were committed using communications technologies. These include proposals for vaguely defined “terrorism” crimes and offenses that would criminalize “incitement to subversion.”

The chair’s amendment, Article 60bis (Article 17 in previous versions), ensures that offenses established under other applicable United Nations conventions and protocols are also considered criminal offenses under domestic law when committed through the use of information and communications technology systems. The provision is an improvement over past proposals, which would have applied to all present and future conventions, but it remains a source of concern in that it could require the creation of new offenses based on convention obligations that were not designed with ICT networks in mind.

Article 60bis is also an improvement over its predecessor in that it adds subsection (2), which clarifies that Article 60bis “shall not be interpreted as establishing offenses under this Convention.” As a number of the Convention’s provisions are carefully limited to offenses “established in accordance with the Convention,” including the convention’s extradition provision, this could have the impact of limiting those provisions so that they do not apply to Article 60bis offenses. However, as our ally ARTICLE 19 pointed out, subtle differences in language might mean that Article 60bis offenses might be considered as established “in accordance with the Convention” despite not being “established under this Convention”, resulting in a far greater scope of application.

One surprising element of the chair’s compromise was its inclusion of a proposal to extend the mandate of the Ad Hoc Committee to negotiate a future protocol supplementing the Convention immediately upon adoption of the Convention by the General Assembly. This could include another list of crimes for a subset of states, further expanding the Convention's reach and exacerbating the risk of human rights abuses.

Real-World Implications

The proposed UN Cybercrime Convention, with its broad cross-border assistance scope and lack of robust minimum safeguards, poses significant risks to human rights. The potential for misuse and abuse is not theoretical: it is a reality faced by individuals and communities around the world. The proposed convention amplifies the existing threats to the LGBTQ+ community, journalists, activists, and minority religious groups, among others. It endorses a framework where nations can surveil benign activities, such as simply sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

The following examples illustrate how transnational repression is already being practiced by various governments, highlighting the urgent need for a narrow scope and robust safeguards in the Convention.

Examples of Transnational Repression Documented in Human Rights Watch's Report “We Will Find You” (A Global Look at How Governments Repress Nationals Abroad):

China: The Chinese government has been implicated in targeting political dissidents abroad through online harassment and defamation campaigns. These tactics aim to silence criticism and control the narrative internationally.

Turkey: Documented instances of Turkey misusing INTERPOL’s Red Notice system to target political opponents abroad. This misuse extends to other multilateral tools, increasing the risk of transnational repression.

Rwanda: Authorities targeted thousands of activists, journalists, and politicians using NSO Group’s Pegasus spyware. This surveillance extends to those living abroad, creating a pervasive sense of fear and threat among the diaspora.

Saudi Arabia: Government agents infiltrated Twitter to spy on dissidents. Similarly, Saudi authorities have been known to use other platforms to gather information on critics, exacerbating the risks faced by activists both domestically and internationally.

Ethiopia: Surveillance follows political refugees abroad, with Ethiopian authorities using commercial spyware to target family members of dissidents living in the UK, thereby exerting pressure on the individuals in exile.

Examples of Arbitrary, Illegitimate and Disproportionate Laws that Could Trigger Surveillance and International Cooperation

Russia: Following the 2023 Supreme Court decision designating the “international LGBT movement” as extremist, arbitrary prosecutions for activities such as displaying the rainbow flag or wearing rainbow-colored accessories have occurred, with penalties up to four years in prison for repeat offenses. Under Article 35’s provisions, Russia could request other countries to surveil and track LGBTQ+ individuals in real time, treating their expressions of identity as serious crimes.

Egypt: In 2017, during a concert where attendees waved rainbow flags, numerous individuals were arrested, with some sentenced to six years in prison for "debauchery" and "inciting debauchery." Cybercrime Law No. 175/2018 contains broad provisions used to silence dissent and target LGBTQ+ individuals. Articles 25 and 26 have been used to prosecute "violations of family values" and other forms of online expression.

Thailand: It is a crime of lèse-majesté to defame, insult, or threaten members of the royal family, carrying a maximum penalty of 15 years in prison. This law has been used to target activists. Thailand could request assistance from its allies to track down and intercept communications of its nationals criticizing the monarch, even while they are traveling or living abroad.

Jordan: The pre-existing cybercrime law has been used against LGBTQ+ people, and the new Cybercrime Law of 2023 expands its capacity to do so. With overly broad and vaguely defined terms, this law will severely restrict individual human rights and will become a tool for prosecuting innocent individuals for their online speech.

Saudi Arabia: Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online. Authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.”

Tunisia: Decree-Law No. 54 (2022) has been used to prosecute media and individuals for "false news," information that harms “public security,” and opposition to government policies, mandating a five-year prison sentence. The first criminal investigation saw the arrest of student Ahmed Hamada for reporting on law enforcement clashes. In the year since Decree-Law 54 was enacted, authorities in Tunisia have continued to prosecute media outlets.

United Arab Emirates: Federal Decree Law No. 34 of 2021 replaces an older law used to stifle dissent, such as sentencing human rights defender Ahmed Mansoor to 10 years in prison. Article 22 mandates prison sentences for sharing unauthorized information online, further restricting the already heavily-monitored online space and making it harder for ordinary citizens, as well as journalists and activists, to share information.

The inclusion of these examples underscores the importance of ensuring that the UN Cybercrime Convention incorporates robust human rights safeguards to prevent its misuse as a tool for transnational repression. The international community must prioritize the protection of fundamental rights and freedoms in the drafting and implementation of this Convention. 

Surveillance Defense for Campus Protests

The recent wave of protests calling for peace in Palestine has been met with unwarranted and aggressive suppression from law enforcement, universities, and other bad actors. It’s clear that the changing role of surveillance on college campuses exacerbates the dangers faced by all of the communities colleges are meant to support, and only serves to suppress lawful speech. These harmful practices must come to an end, and until they do, activists should take precautions to protect themselves and their communities. There are no easy or universal answers, but here we outline some common considerations to help guide campus activists.

Protest Pocket Guide

How We Got Here

Over the past decade, many campuses have been building up their surveillance arsenal and inviting a greater police presence on campus. EFF and fellow privacy and speech advocates have been clear that this is a dangerous trend that chills free expression and makes students feel less safe, while fostering an adversarial and distrustful relationship with the administration.

Many tools used on campuses overlap with the street-level surveillance used by law enforcement, but universities are in a unique position of power over students being monitored. For students, universities are not just their school, but often their home, employer, healthcare provider, visa sponsor, place of worship, and much more. This reliance heightens the risks imposed by surveillance, and brings it into potentially every aspect of students’ lives.

Putting together a security plan is an essential first step to protect yourself from surveillance.

EFF has also been clear for years: as campuses build up their surveillance capabilities in the name of safety, they chill speech and foster a more adversarial relationship between students and the administration. Yet, this expansion has continued in recent years, especially after the COVID-19 lockdowns.

This came to a head in April, when groups across the U.S. pressured their universities to disclose and divest their financial interest in companies doing business in Israel and weapons manufacturers, and to distance themselves from ties to the defense industry. These protests echo similar campus divestment campaigns against the prison industry in 2015, and the campaign against apartheid South Africa in the 1980s. However, the current divestment movement has been met with disproportionate suppression and unprecedented digital surveillance from many universities.

This guide is written with those involved in protests in mind. Student journalists covering protests may also face digital threats and can refer to our previous guide to journalists covering protests.

Campus Security Planning

Putting together a security plan is an essential first step to protect yourself from surveillance. You can’t protect all information from everyone, and as a practical matter you probably wouldn’t want to. Instead, you want to identify what information is sensitive and who should and shouldn’t have access to it.

That means this plan will be very specific to your context and your own tolerance of risk from physical and psychological harm. For a more general walkthrough you can check out our Security Plan article on Surveillance Self-Defense. Here, we will walk through this process with prevalent concerns from current campus protests.

What do I want to protect?

Current university protests are a rapid and decentralized response to what the International Court of Justice has ruled to be a plausible case of genocide in Gaza, and to the reported humanitarian crisis in occupied East Jerusalem and the West Bank. Such movements will need to focus on secure communication, immediate safety at protests, and protection from collected data being used for retaliation—either at protests themselves or on social media.

At a protest, a mix of visible and invisible surveillance may be used to identify protesters. This can include administrators or law enforcement simply attending and keeping notes of what is said, but digital recordings often make that same approach less plainly visible. This doesn't just mean video and audio recordings—protesters may also be subject to tracking methods like face recognition technology and location tracking from their phone, school ID usage, or other sensors. So here, you want to be mindful of anything you say and anything on your person that can reveal your identity or role in the protest, or those of fellow protesters.

This may also be paired with online surveillance. The university or police may monitor activity on social media, even joining private or closed groups to gather information. Of course, any services hosted by the university, such as email or WiFi networks, can also be monitored for activity. Again, taking care of what information is shared with whom is essential, including carefully separating public information (like the time of a rally) and private information (like your location when attending). Also keep in mind how what you say publicly, even in a moment of frustration, may be used to draw negative attention to yourself and undermine the cause.

However, many people may strategically use their position and identity publicly to lend credibility to a movement, such as a prominent author or alumnus. In doing so they should be mindful of those around them in more vulnerable positions.

Who do I want to protect it from?

Divestment challenges the financial underpinning of many institutions in higher education. The most immediate adversaries are clear: the university being pressured and the institutions being targeted for divestment.

However, many schools are escalating by inviting police on campus, sometimes as support for their existing campus police, making them yet another potential adversary. Pro-Palestine protests have drawn attention from some federal agencies, meaning law enforcement will inevitably be a potential surveillance adversary even when not invited by universities.

With any sensitive political issue, there are also people who will oppose your position. Others at the protest can escalate threats to safety, or try to intimidate and discredit those they disagree with. Private actors, whether individuals or groups, can weaponize surveillance tools available to consumers online or at a protest, even if it is as simple as video recording and doxxing attendees.

How bad are the consequences if I fail?

Failing to protect information can have a range of consequences that will depend on the institution and local law enforcement’s response. Some schools defused campus protests by agreeing to enter talks with protesters. Others opted to escalate tensions by having police dismantle encampments and having participants suspended, expelled, or arrested. Such disproportionate disciplinary actions put students at risk in myriad ways, depending on how they rely on the institution. The extent to which institutions will attempt to chill speech with surveillance will vary, but unlike direct physical disruption, surveillance tools may be used with less hesitation.

The safest bet is to lock your devices with a pin or password, turn off biometric unlocks such as face or fingerprint, and say nothing but to assert your rights.

All interactions with law enforcement carry some risk, and will differ based on your identity and history of police interactions. This risk can be mitigated by knowing your rights and limiting your communication with police unless in the presence of an attorney. 

How likely is it that I will need to protect it?

Disproportionate disciplinary actions will often coincide with and be preceded by some form of surveillance. Even schools that are more accommodating of peace protests may engage in some level of monitoring, particularly schools that have already adopted surveillance tech. School devices, services, and networks are also easy targets, so try to use alternatives to these when possible. Stick to using personal devices and not university-administered ones for sensitive information, and adopt tools to limit monitoring, like Tor. Even banal systems like campus ID cards, presence monitors, class attendance monitoring, and wifi access points can create a record of student locations or tip off schools to people congregating. Online surveillance is also easy to implement by simply joining groups on social media, or even adopting commercial social media monitoring tools.

Schools that invite a police presence make their students and workers subject to the current practices of local law enforcement. Our resource, the Atlas of Surveillance, gives an idea of what technology local law enforcement is capable of using, and our Street-Level Surveillance hub breaks down the capabilities of each device. But other factors, like how well-resourced local law enforcement is, will determine the scale of the response. For example, if local law enforcement already have social media monitoring programs, they may use them on protesters at the request of the university.

Bad actors not directly affiliated with the university or law enforcement may be the most difficult factor to anticipate. These threats can arise from people who are physically present, such as onlookers or counter-protesters, and from individuals who are offsite. Information about protesters can be turned against them for purposes of surveillance, harassment, or doxxing. Taking the measures found in this guide will also help protect you from this possibility.

Finally, don’t confuse your rights with your safety. Even if you are in a context where assembly is legal and surveillance and suppression is not, be prepared for it to happen anyway. Legal protections are retrospective, so for your own safety, be prepared for adversaries willing to overstep these protections.

How much trouble am I willing to go through to try to prevent potential consequences?

There is no perfect answer to this question, and every individual protester has their own risks and considerations. In setting this boundary, it is important to communicate it with others and find workable solutions that meet people where they’re at. Being open and judgment-free in these discussions makes the movement being built more consensual and less prone to abuses. Centering consent in organizing can also help weed out bad actors in your own camp who will raise the risk for all who participate, deliberately or not.

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. 

Sometimes a surveillance self-defense tactic will invite new threats. Some universities and governments have been so eager to get images of protesters’ faces that they have threatened criminal penalties for people wearing masks at gatherings. These new potential charges now need to be weighed against the potential harms of face recognition technology, doxxing, and retribution someone may face by exposing their face.

Privacy is also a team sport. Investing a lot of energy in only your own personal surveillance defense may have diminishing returns, but making an effort to educate peers and adjust the norms of the movement puts less work on any one person and has a potentially greater impact. Sharing the resources in this post and the Surveillance Self-Defense guides, and hosting your own workshops with the Security Education Companion, are good first steps.

Who are my allies?

Cast a wide net of support; many members of faculty and staff may be able to offer students assistance, like institutional knowledge about school policies. Many school alumni are also invested in the reputation of their alma mater, and can bring outside knowledge and resources.

A number of non-profit organizations can also support protesters who face risks on campus. For example, many campus bail funds have been set up to support arrested protesters. The National Lawyers Guild has chapters across the U.S. that can offer Know Your Rights trainings and train people to become legal observers (people who document a protest so that there is a clear legal record of civil liberties infringements should protesters face prosecution).

Many local solidarity groups may also be able to help provide trainings, street medics, and jail support. Many groups in EFF’s grassroots network, the Electronic Frontier Alliance, also offer free digital rights training and consultations.

Finally, EFF can help victims of surveillance directly when they contact us by email at info@eff.org or by Signal at 510-243-8020. Even when EFF cannot take on your case, we have a wide network of attorneys and cybersecurity researchers who can offer support.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea.

Tips and Resources

Keep in mind that nearly any electronic device you own can be used to track you, but there are a few steps you can take to make that data collection more difficult. To prevent tracking, your best option is to leave all your devices at home, but that’s not always possible, and it makes communication and planning much more difficult. So, it’s useful to get an idea of what sorts of surveillance are feasible, and what you can do to prevent them. This is meant as a starting point, not a comprehensive summary of everything you may need to do or know:

Prepare yourself and your devices for protests

Our guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We have a handy printable version available here, too, that makes it easy to share with others.

Beyond preparing according to your security plan, preparing plans with networks of support outside of the protest is a good idea. Tell friends or family when you plan to attend and leave, so that if there are arrests or harassment they can follow up to make sure you are safe. If there may be arrests, make sure to have the phone number of an attorney and possibly coordinate with a jail support group.

Protect your online accounts

Doxxing, when someone exposes information about you, is a tactic reportedly being used on some protesters. This information is often found in public places, like "people search" sites and social media. Being doxxed can be overwhelming and difficult to control in the moment, but you can take some steps to manage it or at least prepare yourself for what information is available. To get started, check out the guide that the New York Times created to train its journalists to dox themselves, and PEN America's Online Harassment Field Manual.

Compartmentalize

Being deliberate about how and where information is shared can limit the impact of any one breach of privacy. Online, this might look like using different accounts for different purposes or preferring smaller Signal chats, and offline it might mean being deliberate about with whom information is shared, and bringing “clean” devices (without sensitive information) to protests.

Be mindful of potential student surveillance tools 

It’s difficult to track what tools each campus is using to track protesters, but it’s possible that colleges are using the same tricks they’ve used for monitoring students in the past alongside surveillance tools often used by campus police. One good rule of thumb: if a device, software, or an online account was provided by the school (like an .edu email address or test-taking monitoring software), then the school may be able to access what you do on it. Likewise, remember that if you use a corporate or university-controlled tool without end-to-end encryption for communication or collaboration, like online documents or email, content may be shared by the corporation or university with law enforcement when compelled with a warrant. 
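To illustrate the difference end-to-end encryption makes, here is a minimal sketch using the PyNaCl library (our choice for illustration; the guide above doesn't prescribe a specific tool). The point is structural: a relay or service provider that only ever holds ciphertext has no plaintext to hand over, warrant or not.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only: real messengers such as Signal add forward secrecy,
# authentication, and key verification on top of this basic idea.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key on her own device.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(
    b"organizing meeting moved to 6pm"
)

# A relay server (or a subpoenaed provider) sees only opaque bytes:
print(bytes(ciphertext).hex()[:48] + "...")

# Only Bob's private key recovers the message.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext.decode())
```

A university-hosted email or document server, by contrast, stores readable content itself, which is exactly what can be produced in response to a warrant.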

Know your rights if you’re arrested

Thousands of students, staff, faculty, and community members have been arrested, but it’s important to remember that the vast majority of the people who have participated in street and campus demonstrations have not been arrested or taken into custody. Nevertheless, be careful and know what to do if you’re arrested.

The safest bet is to lock your devices with a PIN or password, turn off biometric unlocks such as face or fingerprint, and say nothing except to assert your rights, for example by refusing consent to a search of your devices, bags, vehicles, or home. Law enforcement can lie and pressure arrestees into saying things that are later used against them, so waiting until you have a lawyer before speaking is always the right call.

Barring a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Law enforcement may not respect your rights when they’re taking you into custody, but your lawyer and the courts can protect your rights later, especially if you assert them during the arrest and any time in custody.

EU Council Presidency’s Last-Ditch Effort For Mass Scanning Must Be Rejected 

By Joe Mullin
June 6, 2024, 16:43

As the current leadership of the EU Council enters its final weeks, it is debating a dangerous proposal that could lead to scanning the private files of billions of people. 

EFF strongly opposes this proposal, put forward by the Belgian Presidency of the EU Council, the institution that represents the governments of EU member states. Together with European Digital Rights (EDRi) and other groups that defend encryption, we have sent an open letter to the EU Council explaining the dangers of the proposal. The letter asks Ministers in the Council of the EU to reject all proposals that are inconsistent with end-to-end encryption, including surveillance technologies like client-side scanning.

The Belgian proposal was debated behind closed doors, and civil society groups were only recently able to evaluate and discuss it after it was leaked to the press.

Users who don’t agree to the scanning will be forbidden from sharing images or links.

If the proposal is adopted, it would represent a significant step backwards. Since 2022, the EU has been debating a file-scanning regulation that would eviscerate end-to-end encryption. Realizing that this system of client-side scanning, which some have called “chat control,” would violate the human rights of EU residents, a key European Parliament committee agreed in November to amendments that would protect end-to-end encryption. 

How We Got Here

EFF’s advocacy has always defended the right to have a private conversation online, and the technology that can enable that: end-to-end encryption. That’s why, since 2022, we have opposed the efforts by some EU officials to put a backdoor into encrypted communications, in the name of protecting children online. 

TAKE ACTION

SIGN THE PETITION: STOP SCANNING ME!

Without major changes, the child protection proposal would have been a disaster for privacy and security online. In November, we won a victory when the EU Parliament’s civil liberties committee agreed to make big changes to the proposal, making it clear that states could not engage in mass scanning of files, photos, and messages in the name of fighting crime.

The Belgian proposal, which EFF has reviewed, specifies that online services would be forced to install software so that child abuse material “should remain detectable in all interpersonal communications services.” To do this, the online services must apply “vetted technology”—in other words, government-approved software—that would allow law enforcement to scan the photos, messages and files of any user. 

The proposal actually goes on to suggest that users should be asked to “give explicit consent” for this invasion of privacy. Users who don’t agree to the scanning will be forbidden from sharing images or links. The idea of whitewashing mass surveillance with a government-approved “click-through” agreement, and banning users from basic internet functionality if they don’t agree, sounds like a dystopian novel—but it’s being seriously debated. 
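For readers unfamiliar with the mechanics, here is a deliberately simplified sketch of what client-side scanning with a consent gate amounts to. Everything here is hypothetical and schematic: real proposals contemplate perceptual hashes and machine-learning classifiers rather than the exact-hash matching shown, but the architecture is the same, with content inspected on the user's own device before encryption is applied.

```python
import hashlib

# Schematic model of client-side scanning with a consent gate (hypothetical
# names throughout). The "vetted technology" is an opaque, government-approved
# matcher the user cannot inspect.
VETTED_HASH_DB = {hashlib.sha256(b"known flagged image").hexdigest()}

def send_image(image: bytes, user_consented_to_scanning: bool) -> None:
    if not user_consented_to_scanning:
        # The proposal's "choice": refuse the scan, lose basic functionality.
        raise PermissionError("sharing images and links is disabled")
    if hashlib.sha256(image).hexdigest() in VETTED_HASH_DB:
        # On a match, the device reports its own user.
        print("match: content reported")
    # Encryption happens only after the scan, so "end-to-end" no longer
    # means that only the endpoints can evaluate the message.
    ciphertext = image[::-1]  # stand-in for real encryption
    print(f"sent {len(ciphertext)} encrypted bytes")

send_image(b"an ordinary family photo", user_consented_to_scanning=True)
```

Because every image is checked before it is encrypted, the scan rather than the encryption determines what the system can see, which is why we and other encryption defenders consider such designs incompatible with end-to-end encryption.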

We reject mass-scanning as a means of public safety. Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. Government eavesdropping in the name of crime-fighting must always be targeted, narrowly limited, and subject to judicial oversight. 

The Belgian Presidency’s proposal is the latest in a long line of attempts by governments to evade this basic human rights concept. As its details become more widely known, this colossally unpopular spying idea will be rejected not just by EFF and other NGOs, but by voting publics in the EU and beyond. 

Security, Surveillance, and Government Overreach – the United States Set the Path but Canada Shouldn’t Follow It

The Canadian House of Commons is currently considering Bill C-26, which would make sweeping amendments to the country’s Telecommunications Act, expanding the Minister of Industry’s power over telecommunication service providers. It’s designed to accomplish a laudable and challenging goal: ensure that government and industry partners efficiently and effectively work together to strengthen Canada’s network security in the face of repeated hacking attacks.

C-26 is not identical to US national security laws. But without adequate safeguards, it could open the door to similar practices and orders.

As researchers and civil society organizations have noted, however, the legislation contains vague and overbroad language that may invite abuse and pressure on ISPs to do the government’s bidding at the expense of Canadian privacy rights. It would vest substantial authority in Canadian executive branch officials to (in the words of C-26’s summary) “direct telecommunications service providers to do anything, or refrain from doing anything, that is necessary to secure the Canadian telecommunications system.” That could include ordering telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. Safeguards to protect privacy and civil rights are few; C-26’s only express limit is that Canadian officials cannot order service providers to intercept private or radio-based telephone communications.

Unfortunately, we in the United States know all too well what can happen when government officials assert broad discretionary power over telecommunications networks. For over 20 years, the U.S. government has deputized internet service providers and systems to surveil Americans and their correspondents, without meaningful judicial oversight. These legal authorities and details of the surveillance have varied, but, in essence, national security law has allowed the U.S. government to vacuum up digital communications so long as the surveillance is directed at foreigners currently located outside the United States and doesn’t intentionally target Americans. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches to find Americans’ communications.

Congress has attempted to add in additional safeguards over the years, to little avail. In 2023, for example, the Federal Bureau of Investigation (FBI) released internal documents used to guide agency personnel on how to search the massive databases of information they collect. Despite reassurances from the intelligence community about its “culture of compliance,” these documents reflect little interest in protecting privacy or civil liberties. At the same time, the NSA and domestic law enforcement authorities have been seeking to undermine the encryption tools and processes on which we all rely to protect our privacy and security.

C-26 is not identical to U.S. national security laws. But without adequate safeguards, it could open the door to similar practices and orders. What is worse, some of those orders could be secret, at the government’s discretion. In the U.S., that kind of secrecy has made it impossible for Americans to challenge mass surveillance in court. We’ve also seen companies presented with gag orders in connection with “national security letters” compelling them to hand over information. C-26 does allow for judicial review of non-secret orders, e.g. an order requiring an ISP to cut off an account-holder or website, if the subject of those orders believes they are unreasonable or ungrounded. But that review may include secret evidence that is kept from applicants and their counsel.

Canadian courts will decide whether a law authorizing secret orders and evidence is consistent with Canada’s legal tradition. But either way, the U.S. experience offers a cautionary tale of what can happen when a government grants itself broad powers to monitor and direct telecommunications networks, absent corresponding protections for human rights. In effect, the U.S. government has created, in the name of national security, a broad exception to the Constitution that allows the government to spy on all Americans and denies them any viable means of challenging that spying. We hope Canadians will refuse to allow their government to do the same in the name of “cybersecurity.”

Win for Free Speech! Australia Drops Global Takedown Order Case

As we put it in a blog post last month, no single country should be able to restrict speech across the entire internet. That's why EFF celebrates the news that Australia's eSafety Commissioner is dropping its legal effort to have content on X, the website formerly known as Twitter, taken down across the globe. This development comes just days after EFF and FIRE were granted official intervener status in the case. 

In April, the Commissioner ordered X to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post in Australia, but it declined to block it elsewhere. The Commissioner then asked an Australian court to order a global takedown — securing a temporary order that was not extended. EFF moved to intervene on behalf of X, and legal action was ongoing until this week, when the Commissioner announced she would discontinue Federal Court proceedings. 

We are pleased that the Commissioner saw the error in her efforts and dropped the action. Global takedown orders threaten freedom of expression, create conflicting legal obligations, and reduce the internet to the lowest common denominator of content, allowing the least tolerant legal system to determine what we all are able to read and distribute online. 

As part of our continued fight against global censorship, EFF opposes efforts by individual countries to write the rules for free speech for the entire world. Unfortunately, all too many governments, even democracies, continue to lose sight of how global takedown orders threaten free expression for us all. 

Car Makers Shouldn’t Be Selling Our Driving History to Data Brokers and Insurance Companies

You accelerated multiple times on your way to Yosemite for the weekend. You braked when driving to a doctor appointment. If your car has internet capabilities, GPS tracking or OnStar, your car knows your driving history.

And now we know: your car insurance carrier might know it, too.

In a recent New York Times article, Kashmir Hill reported how everyday moments in your car like these create a data footprint of your driving habits and routine that is, in some cases, being sold to insurance companies. Collection often happens through so-called “safe driving” programs pre-installed in your vehicle through an internet-connected service on your car or a connected car app. Real-time location tracking often starts when you download an app on your phone or tap “agree” on the dash screen before you drive your car away from the dealership lot.

Technological advancements in cars have come a long way since General Motors launched OnStar in 1996. From the influx of mobile data facilitating in-car navigation, to the rise of telematics in the 2010s, cars today are more internet-connected than ever. This enables, for example, delivery of emergency warnings, notice of when you need an oil change, and software updates. Recent research predicts that by 2030, more than 95% of new passenger cars will contain some form of internet-connected service and surveillance.

Car manufacturers including General Motors, Kia, Subaru, and Mitsubishi have some form of services or apps that collect, maintain, and distribute your connected car data to insurance companies. Insurance companies spend thousands of dollars purchasing your car data to factor in these “select insights” about your driving behavior. Those insights are then factored into your “risk score,” which can potentially spike your insurance premiums.

As Hill reported, the OnStar Smart Driver program is one example of an internet-connected service that collects driver data and sends it to car manufacturers. They then sell this digital driving profile to third-party data brokers, like LexisNexis or Verisk. From there, data brokers generally sell information to anyone with the money to buy it. After Hill’s report, GM announced it would stop sharing data with these brokers.

The manufacturers and car dealerships subvert consumers’ authentic choice to participate in the collection and sharing of their driving data. This is where consumers should be extremely wary, and where we need stronger data privacy laws. As Hill reported, a salesperson at the dealership may enroll you without your even realizing it, in pursuit of an enrollment bonus. All of this is further muddied by car manufacturers’ lack of clear, detailed, and transparent “terms and conditions” disclosure forms. These are often too long to read and filled with technical legal jargon, especially when all you want is to drive your new car home. And even for the unusual consumer who takes the time to read the privacy disclosures, as researcher Jen Caltrider of the Mozilla Foundation noted in Hill’s article, drivers “have little idea about what they are consenting to when it comes to data collection.”

Better Solutions

This whole process puts people in a rough situation. We are unknowingly surveilled to generate a digital footprint that companies later monetize, one that includes details about many parts of daily life, from how we eat to how long we spend on social media. And now, it includes the way we drive and the locations we visit with our car.

That's why EFF supports comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent.

If there were clear data minimization guardrails in place, it would curb overzealous processing of our automotive data. General Motors would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for. For example, through the OnStar program, drivers may want to provide their GPS location data to assist rescue efforts, or to automatically call 911 if they’ve been in an accident. Any car data beyond what is needed to provide services people asked for should not be collected. And it certainly shouldn't be sold to data brokers—who then sell it to your car insurance carriers.

Hill’s article shines a light on another part of daily life penetrated by technological advancements that have no clear privacy guardrails. Consumers do not actually know how companies are processing their data – much less exercise control over that processing.

That’s why we need opt-in consent rules: companies must be forbidden from processing our data unless they first obtain our genuine opt-in consent. This consent must be informed and specific, meaning companies cannot hide the request in legal jargon buried under pages of fine print. Moreover, this consent cannot be the product of deceptively designed user interfaces (sometimes called “dark patterns”) that impair autonomy and choice. Further, this consent must be voluntary, meaning, among other things, that it cannot be coerced with pay-for-privacy schemes. Finally, the default must be no data processing until the driver gives permission (“opt-in consent”), as opposed to processing until the driver objects (“opt-out consent”).
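To make that default concrete, here is a minimal sketch of what opt-in, purpose-limited processing could look like in code. Every name in it (the purposes, the consent object) is hypothetical, invented for illustration rather than drawn from any real telematics system:

```python
from dataclasses import dataclass, field

# Hypothetical purposes a connected-car service might process data for.
ALLOWED_PURPOSES = {"crash_911", "navigation"}

@dataclass
class DriverConsent:
    # Opt-in default: the set of authorized purposes starts empty.
    granted: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.granted.add(purpose)

def process_location(consent: DriverConsent, purpose: str, gps_fix: tuple) -> bool:
    """Process a GPS fix only for a purpose the driver explicitly opted into."""
    if purpose not in ALLOWED_PURPOSES:   # purpose limitation / data minimization
        return False
    if purpose not in consent.granted:    # opt-in, not opt-out: the default is "no"
        return False
    # ... use gps_fix strictly for the requested service; never retain or resell ...
    return True

consent = DriverConsent()
# Selling a "risk score" to an insurer is not a purpose the driver asked for.
assert not process_location(consent, "insurance_scoring", (37.75, -119.59))
consent.grant("crash_911")
assert process_location(consent, "crash_911", (37.75, -119.59))
```

Under rules like these, the burden sits with the company: no recognized purpose and no granted consent means no processing.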

But today, consumers do not control, or often even know, to whom car manufacturers are selling their data. Is it car insurers, law enforcement agencies, advertisers?

Finally, if you want to figure out what your car knows about you, and opt out of sharing when you can, check out our instructions here.

Podcast Episode: AI on the Artist's Palette

By: Josh Richman
June 4, 2024 at 03:06

Collaging, remixing, sampling—art always has been more than the sum of its parts, a synthesis of elements and ideas that produces something new and thought-provoking. Technology has enabled and advanced this enormously, letting us access and manipulate information and images in ways that would’ve been unimaginable just a few decades ago.

(You can also find this episode on the Internet Archive and on YouTube.)

For Nettrice Gaskins, this is an essential part of the African American experience: The ability to take whatever is at hand—from food to clothes to music to visual art—and combine it with life experience to adapt it into something new and original. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how she takes this approach in applying artificial intelligence to her own artwork, expanding the boundaries of Black artistic thought.  

In this episode you’ll learn about: 

  • Why making art with AI is about much more than just typing a prompt and hitting a button 
  • How hip-hop music and culture was an early example of technology changing the state of Black art 
  • Why the concept of fair use in intellectual property law is crucial to the artistic process 
  • How biases in machine learning training data can affect art 
  • Why new tools can never replace the mind of a live, experienced artist 

Dr. Nettrice R. Gaskins is a digital artist, academic, cultural critic, and advocate of STEAM (science, technology, engineering, arts, and math) fields whose work explores “techno-vernacular creativity” and Afrofuturism. She teaches, writes, “fabs,” and makes art using algorithms and machine learning. She has taught multimedia, visual art, and computer science to high school students, and now is assistant director of the Lesley STEAM Learning Lab at Lesley University. She was a 2021 Ford Global Fellow, serves as an advisory board member for the School of Literature, Media, and Communication at Georgia Tech, and is the author of “Techno-Vernacular Creativity and Innovation” (2021). She earned a BFA in Computer Graphics with honors from Pratt Institute in 1992; an MFA in Art and Technology from the School of the Art Institute of Chicago in 1994; and a doctorate in Digital Media from Georgia Tech in 2014.

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

NETTRICE GASKINS
I just think we have a need to remix, to combine, and that's where a lot of our innovation comes from, our ability to take things that we have access to. And rather than see it as a deficit, I see it as an asset because it produces something beautiful a lot of the times. Something that is really done for functional reasons or for practical reasons, or utilitarian reasons is actually something very beautiful, or something that takes it beyond what it was initially intended to be.

CINDY COHN
That's Nettrice Gaskins. She’s a professor, a cultural critic and a digital artist who has been using algorithms and generative AI as a part of her artistic practice for years.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping into the fray when they DO go wrong. But this show is about envisioning, and hopefully helping create, a better future.

JASON KELLEY
Our guest today is Nettrice Gaskins. She’s the assistant director of the Lesley STEAM Learning Lab at Lesley University and the author of Techno-Vernacular Creativity and Innovation. Her artwork has been featured by the Smithsonian, among many other institutions.

CINDY COHN
Nettrice has spoken about how her work creating art using generative AI prompts is directly related to remix culture and hip hop and collage. There’s a rich tradition of remixing to create new artworks that can be more than the sum of their parts, and – at least the way that Nettrice uses it – generative AI is another tool that can facilitate this kind of art. So we wanted to start the conversation there.

NETTRICE GASKINS
Even before hip hop, even the food we ate, um, poor people didn't have access to, you know, ham or certain things. So they used the intestines of a pig and then they created gumbo, because they had a little bit of this and a little bit of that and they found really creative and innovative ways to put it all together that is now seen as a thing to have, or have tried. So I think, you know, when you have around the world, not just in the United States, but even in places that are underserved or disenfranchised you have this, still, need to create, and to even innovate.

And I think a lot of the history of African Americans, for example, in the United States, they weren't permitted to have their own languages. But they found ways to embed it in language anyway. They found ways to embed it in the music.

So I think along the way, this idea of what we now know as remixing or sampling or collage has been there all along and this is just one other way.  I think that once you explain how generative AI works to people who are familiar with remixing and all this thing in the history, it clicks in many ways.
Because it starts to make sense that instead of, you know, 20 different magazines I can cut images out of and make a collage with, now we're talking about thousands of different pieces of information and data that can inform how an image is created, and that it's a prediction, and that we can create all these different predictions. It sounds a lot like what happens when we were looking at a bunch of ingredients in the house and realizing we had to make something from nothing, and we made gumbo.

And that gumbo can take many different forms. There's a gumbo in this particular area of the country, then there's gumbo in this particular community, and they all have the same idea, but the output, the taste, the ingredients are different. And I think that when you place generative AI in that space, you're talking about a continuum. And that's kind of how I treat it when I'm working with gen AI.

CINDY COHN
I think that's so smart. And the piece of that that's important, that's kind of inherent in the way you're talking about it, is that the person doing the mixing, right? The chef, right, is the one who makes the choices, and who's the chef matters, right?

NETTRICE GASKINS
And also, you know, when they did collage, there's no attribution. So if you look at a Picasso work done in collage, all the papers, newspapers that he took from, there's no list of what magazines those images came from, and you could have hundreds, or 50, or four different references. And they created fair use kind of around stuff like that, to protect, you know, works that are like collage or stuff from modern art.

And we're in a situation where those sources are now quadrupled, or it's not even that, it's many times more, as opposed to when we were just using paper or photographs.

We can't look at it the same because the technology is not the same, however, some of the same ideas can apply. Anybody can do collage, but what makes collage stand out is the power of the image once it's all done. And in some cases people don't want to care about that, they just want to make collage. They don't care, they're a kid and they just want to make paper and put it together, make a greeting card and give it to mom.

Other people make some serious work, sometimes very detailed, using collage, and that's just paper. We're not even talking about digital collage, or the ways we use Adobe Photoshop to layer images and create digital collages, and now Photoshop's considered to be an AI generator as well. So I think that if we look at the whole continuum of modern art, we see this need to curate abstractions from things from life.

And, you know, Picasso was looking at African art, there's a way in which they abstracted that he pulled it into cubism, him and many other artists of his time. And then other artists looked at Picasso and then they took it to whatever level they took it to. But I think we don't see the continuum. We often just go by the tool or go by the process and not realize that this is really an extension of what we've done before. Which is how I view gen AI. And the way that I use it is oftentimes not just hitting a button or even just cutting and pasting. It is a real thoughtful process about ideas and iteration and a different type of collage.

CINDY COHN
I do think that this bridges over into, you know, an area where EFF does a lot of work, right, which is really making sure we have a robust Fair Use doctrine that doesn't get stuck in one technology, but really can grow because, you know we definitely had a problem with hip hop where the, kind of, over-copyright enforcement really, I think, put a damper on a lot of stuff that was going on early on.

I don't actually think it serves artists either. We should look elsewhere for ways to make sure that artists get paid, rather than trying to control each piece and building a monetization scheme based on the individual pieces. I don't know if you agree, but that's how I think about it.

NETTRICE GASKINS
Yeah, and, you know, just like we can't look at collage traditionally and then look at gen AI as exactly the same. There are some principles and concepts that I think are very similar, but, you know, there's just more data. This is much more involved than just cutting and pasting on canvas board or whatever, that we're doing now.

You know, I grew up with hip hop, hip hop is 50 this year, I'm 53, so I was three, so hip hop is my whole life. You know, from the very beginning to now. And I've also had some education or some training in sampling. So I had a friend who was producing demos, and I would sit there all night and watch him splice up, you know, different sounds. And eventually I learned how to do it myself. So I know the nature of that. I even spliced up sampled music further to create new compositions with that.

And so I'm very much aware of that process and how it connects even from the visual arts side, which is mostly what I am as a visual artist, of being able to splice up and, and do all that. And I was doing that in 1992.

CINDY COHN
Nice.

NETTRICE GASKINS
I was trying to do it in 1987, the first time I used an Amiga and DPaint; I was trying to make collages then, in addition to what I was doing in my visual arts classes outside of that. So I've always been interested in this idea. But if you look at the history of even the music, these were poor kids living in the Bronx. These were poor kids who couldn't afford all the things the other kids who were well off had, so they would go to the trash bins and take equipment and re-engineer it and come up with stuff that DJs around the world are now using. Stuff people around the world are doing now, but these kids didn't have it, so they had to be innovative. They had to think outside the box. They weren't musicians. They didn't have access to instruments, but what they did have access to was records. And they had access to, you know, discarded electronics, and they were able to figure out a way to stretch out a rhythm so that people could dance to it.

They had the ability to layer sounds so that there was no gap between one album and the next, so they could continue that continuous play so that the party kept going. They found ways to do that. They didn't go to a store and buy anything that made that happen. They made it happen by tinkering and doing all kinds of things with the equipment that they had access to, which is from the garbage.

CINDY COHN
Yeah, absolutely. I mean, Grandmaster Flash and the creation of the crossfader and a lot of actual, kind of, old school hardware development, right, came out of that desire and that recognition that you could take these old records and cut them up, right? Pull the breaks and play them over and over again. And I just think that it's pulling on something very universal. Definitely based upon the fact that a lot of these kids didn't have access to formal instruments and formal training, but also just finding a way to make that music, make that party still go despite that. There's just something beautiful about that.

And I guess I'm, I'm hoping, you know, AI is quite a different context at this point, and certainly it takes a lot of money to build these models. But I'm kind of interested in whether you think we're headed towards a future where these foundational models or the generative AI models are ubiquitous and we'll start to see the kids of the future picking them up and building new things out of them.

NETTRICE GASKINS
I think they could do it now. I think that with the right situation where they could set up a training model and figure out what data they wanted to go into the model and then use that model and build it over time. I just think that it's the time and the space, just like the time and the space that people had to create hip hop, right?

The time and the space to get in a circle and perform together or get into a room and have a function or party. I think that it was the time. And I think that, we just need that moment in this space to be able to produce something else that's more culturally relevant than just something that's corporate.
And I think my experiences as an artist, as someone who grew up around hip-hop all my life, some of the people that I know personally are pioneers in that space of hip-hop. But also, I don't even stay in hip-hop. You know, I was talking about sashiko, man, that's a Japanese hand-stitching technique that I'm applying, remixing to. And for me to do that with Japanese people, you know, and then their first concern was that I didn't know enough about the sashiko to be going there. And then when I showed them what I knew, they were shocked. Like, when I go into, I go deep in. And so they were very like, Oh, okay. No, she knows.

Sashiko is a perfect example. If you don't know about sashiko embroidery and hand stitching: these were poor people who wanted to stretch out their fabrics and clothing for longer, because they were poor. So they figured out ways to create these intricate stitching patterns that reinforced the fabric so that it would last longer. And then they would do patches, like patchwork quilts; it was both a quilting and an embroidery technique for poor people, once again, using what they had.

Just like with gumbo, here's another situation of people who didn't have access to fancy clothing or fancy textiles, but found a way. And the work that they did was beautiful aesthetically, even though it was utilitarian in terms of why they did it. But now we have this entire cultural art form that comes out of that, and it's beautiful.

And I think that's kind of what has happened along the way. Just like there are gatekeepers in the art world, so the Picassos get in, but others don't necessarily. I think about Romare Bearden, who did get into some of the museums and things. But most people know of Picasso; they don't know about Romare Bearden, who decided to use collage to represent black life.

But I also feel like, we talk about equity, and we talk about who gets in, who has the keys. The same thing occurs in generative AI, or just AI in general. The New York Times had an article recently that listed all the AI pioneers, and no women were involved, it was just men. And then there was a Medium article: here were 13, 15 women you could have had in your list. Once again, we see people deciding who holds the keys. These are the people that hold the keys. And in some cases, it's based on what academic institution you're at.

So again, who holds the keys? Even among the women who were listed, it's the MITs and the Stanfords. And somewhere out there, there's an AI innovator who isn't at any of those institutions but is doing some cool things within a certain niche. We don't hear those stories, and there's not even an opening to explore them; the person who wrote that piece and just included those men didn't even think about women, didn't even think about the other possibilities of who might be innovating in this space.

And so, year in and year out, every time there's a new change in our landscape, we still have the same kinds of historical omissions that have been going on for many years.

JASON KELLEY
Could we lift up some of the work that you have been doing and talk about, like, the specific process or processes that you've used? How do you actually use this? 'Cause I think a lot of people that listen probably just know that you can go to a website and type in a prompt and get an image, and they don't know about, like, training it, how you can do that yourself and how you've done it. So I'm wondering if you could talk a little bit about your specific process.

NETTRICE GASKINS
So, I think, you know, people were saying, especially maybe two years ago, that my color scheme was unusually advanced for just using Gen AI. Well, I took two semesters of mandatory color theory in college.

So I had color theory training long before this stuff popped up. I was a computer graphics major, but I still had to take those classes. And so, yeah, my sense of color theory and color science is going to be strong because I had to do that every day as a freshman. And so that will show up.

I've had to take drawing, I've had to take painting. And a lot of those concepts that I learned as an art student go into my prompts. So that's one part of it. I'm using colors. I know the compliment. I know the split compliments.

I know the interactions between two colors that came from training, from education, of being in the classroom with a teacher or professor, but also, like one of my favorite books is Cane by an author named Jean Toomer. He only wrote one book, but it's a series of short stories. I love it. It's so visual. The way he writes is so visual. So I started reinterpreting certain aspects of some of my favorite stories from that book.

And then I started interpreting some of those words and things and concepts and ideas in a way that I think the AI can understand, the generator can understand.

So another example would be Maya Angelou's Phenomenal Woman. There's this part of the poem that talks about oil wells, it's one of the lines. So when I generated my interpretation of that part of the poem, the oil wells weren't there. So, in the same generator, I just extended my frame and drew a box: in this area of my image, I want you to generate oil wells.

And then I post it and people have this reaction, right? And then I actually put up the poem and said, this is Midjourney. Its reinterpretation is not just at the level of reinterpreting the image, like, I want to create a Picasso.

I don't, I don't want my work to look like Picasso at all or anybody. I want my work to look like the Cubist movement mixed with the Fauvists mixed with the collages mixed with this, with … I want a new image to pop up. I want to see something brand new and that requires a lot of prompting, a lot of image prompting sometimes, a lot of different techniques.

And it's a trial and error kind of thing until you kind of find your way through. But that's a creative process. That's not hitting a button. That's not cutting and pasting or saying make this look like Picasso. That's something totally different.
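The box technique Nettrice describes, regenerating one region of an image while keeping the rest, is what open diffusion models call inpainting. Here is a rough sketch of the same idea, assuming the open-source diffusers library rather than her actual Midjourney workflow; the file names and prompt are placeholders:

```python
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load an open inpainting model (Midjourney's own region tool is proprietary).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)

image = Image.open("generated_frame.png").convert("RGB")  # the existing image
mask = Image.open("box_mask.png").convert("RGB")          # white box marks the region to redo

# Only the masked region is re-synthesized; the rest of the frame is preserved.
result = pipe(
    prompt="oil wells on the horizon",
    image=image,
    mask_image=mask,
).images[0]
result.save("frame_with_oil_wells.png")
```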

JASON KELLEY
Let’s take a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Nettrice Gaskins.

The way Nettrice talks about her artistic process using generative AI makes me think of that old cliche about abstract art – you know, how people say 'my kid could paint that.' There's a misconception now with Gen AI that people assume you just pop in a few words and boom, you get a piece of art. Sometimes that’s true, but Nettrice's approach goes far beyond a simple prompt.

NETTRICE GASKINS
Well, I did a talk recently, it may have been for the Philadelphia Museum of Art. I did a lecture, and in the Q&A they said, could you just demo what you do? You have some time. And I remember after I demoed, they said, oh, that definitely isn't hitting a button. That is much more. Now I feel like I should go in there.

And a lot of times people come away feeling like, now I really want to get in there and see what I can do. Because it isn't just hitting a button. I was showing, in, what, 30 seconds to a minute, basically how I generate images, which is very different than, you know, what they might think. And that was just within Midjourney. Another reason, personally, is that before I got into the prompt side, it was image style transfer, it was deep style. It wasn't prompt based. It was about applying a style to an image. Now you can apply many styles to one image. But then it was like, apply a style to this photo. And I spent most of my time in generative AI doing that until 2021, with DALL-E and Midjourney.

So before that, there were no prompts, it was just images. But then a lot came from that. The Smithsonian show came from that earlier work. It was like right on the edge of DALL-E and all that stuff coming. But I feel like, you know, my approach even then was somehow I didn't see images that reflected me or reflected, um, the type of images I wanted to see.

So that really propelled me into going into generative AI from the image style, applying styles to, for example, there's something if you're in a computer graphics major or you do computer graphics development or CGI, you may know a lot of people would know something called subsurface scattering.
And subsurface scattering is an effect people apply to skin. It's kind of like a milky glow. It's very well known; you texture and model your person based on that. However, it dulls dark skin tones. And if you look at photography and all the years with film and all that stuff, we have all these examples of where things were calibrated a certain way, not quite for darker skin tones. Here we are again. But there's something called specular reflection, or shine, and apparently when applied, it brings up and enhances darker skin tones. So I wondered if I could apply, using neural image style transfer or deep style, that shine or subsurface scattering to my photographs and create portraits of darker skin tones with enhanced features.

Well that succeeded. It worked. And I was just using 18th century tapestries that had metallics in them. So they have gold or they, you know, they had that shine in it as the style applied.

CINDY COHN
Ah.

NETTRICE GASKINS
So one of those, I did a bunch of series of portraits called the gilded series. And around the time I was working on that and exploring that, um, Greg Tate, the cultural critic and writer, Greg Tate, passed away in 2021 and, um, I did a portrait. I applied my tapestry, the style, and it was a selfie he had taken of himself. So it wasn't like it was from a magazine or anything like that. And then I put it on social media and immediately his family and friends reached out.
So now it's a 25 foot mural in Brooklyn.

CINDY COHN
Wow.

JASON KELLEY
It's beautiful. I was looking at it earlier. We'll link to it.

CINDY COHN
Yeah, I’ve seen it too.

NETTRICE GASKINS
And that was not prompt based, that's just applying some ideas around specular reflection and it says from the Gilded Series on the placard. But that is generative AI. And that is remixing. Some of that is in Photoshop, and I Photoshopped, and some of that is three different outputs from the generator that were put together and combined in Photoshop to make that image.

And when it's nighttime, because it has metallics in there, there's a little bit of a shine to the images. When I see people tag me, if they're driving by in the car, you see that glow. I mean, you see that shine, and it, it does apply. And that came from this experimenting with an idea using generative AI.
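Her pre-prompt, deep-style workflow is an instance of neural image style transfer: a network re-renders a content photo in the texture and palette of a style image, such as a metallic tapestry. As a minimal sketch of that general technique, assuming TensorFlow Hub's publicly released stylization model rather than whatever tool she actually used (image paths are placeholders):

```python
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    # Decode to float32 in [0, 1] and add a batch dimension.
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content = load_image("portrait_photo.jpg")     # the photo to stylize
style = load_image("metallic_tapestry.jpg")    # e.g. a scan of a tapestry with metallics
style = tf.image.resize(style, (256, 256))     # the model expects a 256x256 style image

model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)
stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("gilded_portrait.png", stylized[0])
```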

CINDY COHN
So, when people are thinking about AI right now, you know, we've really worked hard, and EFF has been part of this, but others as well, to put the threat of bias on the table as something we also have to talk about, because it's definitely been a historic problem with AI and machine learning systems, including not recognizing black skin.

And I'm wondering as somebody who's playing with this a lot, how do you think about the role bias plays and how to combat it. And I think your stories kind of do some of this too, but I'd love to hear how you think about combating bias. And I have a follow up question too, but I want to start with that.

NETTRICE GASKINS
Yeah, some of the presentations I've done, I did a Power of Difference talk for Bloomberg, talking to the black community about generative AI. There was a paper I read a month or two ago, um, a study of all the main popular AI generators, like Stable Diffusion, Midjourney, DALL-E, maybe another, and they did an experiment to show bias, to show why this is important. One of the prompts was a portrait of a lawyer. And they did it in all of them, and it was all men...

CINDY COHN
I was going to say it didn't look like me either. I bet.

NETTRICE GASKINS
I think DALL-E was more diverse. So all men, but it was like a black guy, and then there was like a racially ambiguous guy. And, um, was it Midjourney? Um, for Deep Dream Generator, it was just a black guy with a striped shirt.

But for a portrait of a felon, um, Midjourney had kind of a diverse set, still all men, but more diverse, racially ambiguous men. But DALL-E produced three apes and a black man. And so my comment to the audience, or to listeners, is: we know that there's history, in Jim Crow and before that, of linking black men, black people, to apes. Somehow that's in there. The only thing in the prompt was "portrait of a felon," and there are three apes and a black man. How do apes play into "felon"? The connection isn't "felon," the connection is the black man, and then to the apes. That's sitting somewhere, and it easily popped up.

And there's been scary stuff that I've seen in Midjourney, for example. I'm trying to do a blues musician and it gives me an ape with a guitar. So there's that, and it's still all men, right?

So then because I have a certain particular knowledge, I do know of a lawyer who was Constance Baker Motley. So I did a portrait of Constance Baker Motley, but you would have to know that. If I'm a student or someone, I don't know any lawyers and I do portrait of a lawyer for an assignment or portrait of whatever, who knows what might pop up and then how do I process that?

We see bias all the time. Because of who I am, and because I know history, I know why the black man and the apes or animals popped up for "felon," but it still happened, and we still have this reality. And so one of the things needed to offset some of that is artist or user intervention.
So we intervene by changing the image. Thumbs up, thumbs down. Or we can, in the prediction, say, this is wrong. This is not the right information. And eventually it trains the model not to do that. Or we can create a Constance Baker Motley, you know, of our own to offset that, but we would have to have that knowledge first.

And a lot of people don't have that knowledge first. I can think of a lawyer off the top, you know, that's a black woman, which is different from what I got from the AI generators. That intervention right now is key. And then we've got to have more people who are looking at the data, who are looking at the data sources, and who are also training the model, and more ways for people from diverse groups to train the model, or help train the model, so we get better results.

And that usually doesn't happen. These biased outputs happen easily. And so that's kind of my answer to that.

CINDY COHN
One of the stories that I've heard you tell is about working with these dancers in Trinidad and training up a model on Caribbean dancers. And I'm wondering if one of the ways you think about addressing bias, I guess, same as with your lawyer story, is sticking other things into the model, or into the training data, to try to give it a broader frame than it might otherwise have.

But I'm wondering if that's something you do a lot of, and I might ask you to tell that story about the dancers, because I thought it was cool.

NETTRICE GASKINS
That was a Mozilla Foundation sponsored project for many different artists and technologists to interrogate AI, generative AI specifically, but AI in general. And we chose that theme because it was a team of three women, me and two other women. One's a dancer, one's an architect, and those two women are from the Caribbean.

And because during the lockdown there was no festival, there was no carnival, a lot of people across those cultures were doing it on Zoom, having Zoom parties. So we just had Zoom parties where we collected data. We were explaining generative AI, what we were doing and how it worked, to the Caribbean community.

CINDY COHN
Nice.

NETTRICE GASKINS
And then we would put the music on and dance, so we were getting footage from the people who were participating. And then we used PoseNet and machine learning to produce an app that allows you to dance with yourself as a mini dancer, or to dance with shapes, or to create a color painting with movement, using colors from Carnival.

And one of the members, Vernelle Noel, was using GANs, generative adversarial networks, to produce costuming that you might see, but in really futuristic ways, using GAN technology. So there were different ways we could do that, and we explored them with the project.
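The dance app Nettrice describes pairs a pose-estimation model with the dancers' video. The original project used PoseNet in the browser; as a rough Python analogue, here is a sketch using TensorFlow Hub's MoveNet model, which returns 17 body keypoints per frame that an app could map onto shapes or color trails (the input file name is a placeholder):

```python
import tensorflow as tf
import tensorflow_hub as hub

# MoveNet is a successor to PoseNet; both predict body keypoints from an image.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

img = tf.io.decode_jpeg(tf.io.read_file("dancer_frame.jpg"))
# MoveNet Lightning expects a 192x192 int32 batch.
inp = tf.cast(tf.image.resize_with_pad(img[tf.newaxis, ...], 192, 192), tf.int32)

# Output shape [1, 1, 17, 3]: (y, x, confidence) for each of 17 keypoints.
keypoints = movenet(inp)["output_0"][0, 0]
for y, x, score in keypoints.numpy():
    if score > 0.3:  # keep confident detections; drawing code would go here
        print(f"keypoint at ({x:.2f}, {y:.2f}), confidence {score:.2f}")
```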

CINDY COHN
One of the things that, again, I'm kind of feeding back to you from yourself, because I found it really interesting: you've talked about using these tools in a liberatory way, for liberation, as opposed to surveillance and control. And I wondered if you have some thoughts about how best to do that. Like, what are the kinds of things you look for in a project to try to see whether it's really based in liberation, or based in surveillance and monitoring and control? Because that's been a long-time issue, especially for people from majority countries.

NETTRICE GASKINS
You know, we were very careful with the data from the Carnival project. We said after a particular set period of time, we would get rid of the data. We were only using it for this project for a certain period of time, and we have, you know, signed, everyone signed off on that, including the participants.
Kind of like IRB, if you're an academic. And in one case it was: Vernelle was an academic, so it was done through her university, and there was IRB involved. But, um, for the rest, I think it was just an art project. We wanted to be careful with data. Like, we wanted people to know: we're going to collect this, and then we're going to get rid of it once we, you know, do what we need to do.

And I think that's part of it, but also, you know, people have been doing stuff with surveillance technology for a good minute. Artists have been making statements using surveillance technology. People have been making music; there's a lot of rap music and songs about surveillance, about being watched. And in Second Life, I did a wall of eyes that follow you everywhere you go...

CINDY COHN
Oof.

NETTRICE GASKINS
...to curate the feeling of always being watched. And for people who don't know what that's like, it created that feeling in them as avatars. They were like, why am I being watched? And I'm like, this is you, if you're black, at a grocery store, or if you go to Neiman Marcus, you know, a fancy department store. This might be what you feel like. Trying to simulate that in virtual 3D was the goal.

I'm not so much trying to simulate; I'm trying to say, here's another experience. There are people who really get behind the idea that you're taking from other people's work, and that that is the danger. And some people are doing that, I don't want to say that's not the case. There are people out there who don't have a visual vocabulary but want to get in here. And they'll use another person's artwork or their name to play around with tools. They don't have an arts background. And so they are going to do that.

And then there are people like me who want to push the boundaries, and want to see what happens when you mix different tools and do different things. And to those people who say that you're taking other people's work, I say: opt out. Do that. I still continue, because there's been such a lack of representation from artists like me in these spaces; even if you opt out, it doesn't change my process at all.

And that says a lot about gatekeepers, equity, you know, representation in galleries and museums and all of that. The conversation stays in certain circles for digital artists, like on DeviantArt; it just doesn't get at some of the real gray areas around this stuff.

CINDY COHN
I think there's something here about people learning as well, where, you know, young musicians start off and they want to play like Beethoven, right? But at some point you need to find your own voice. And that to me is the thing. Obviously there are people who are just cheaters, who are trying to pass themselves off as somebody else, and that matters and that's important.

But there's also just this period of, I think, artistic growth, where you kind of start out trying to emulate somebody who you admire, and then through that process, you kind of figure out your own voice, which isn't going to be just the same.

NETTRICE GASKINS
And, you know, there was some backlash over a cover that I had done for a book. When the publisher came back, they said, where are your sources? It was a 1949 photograph of my mother and her friends. It has no watermark, so we don't know who took the photo. And obviously, from 1949, it's almost in the public domain; it's like, right on the edge.

CINDY COHN
So close!

NETTRICE GASKINS
But none of those people live anymore. My mom passed in 2018. So I used that as a source, a picture of my mom from a photo album. Or, if it's a client, they pay for licensing of particular stock photos. In one case, I used three stock photos because we couldn't find a stock photo that represented the character of the book.

So I had to do like a Frankenstein of three to create that character. That's a collage. And then that was uploaded to the generator, after that, to go further.
So yeah, I think that, you know, when we get into the backlash, a lot of people think this is all you're doing. And then when I open up the door and say, look at what I'm doing: oh, that's not what she was doing at all!

That's because people don't have the education and they're hearing about it in certain circles, but they're not realizing that this is another creative process that's new and it's entering our world that people can reject or not.

Like, people would say digital photography is going to take our jobs. Really, the best photography comes from being in a darkroom, going through the process with the enlarger and the chemicals. That's the true photography, not what you do with these digital cameras and software; that's not real photography. Same kind of idea, but here we are talking about something else. And a very, very similar reaction.

CINDY COHN
Yeah, I think people tend to want to cling to the thing that they're familiar with as the real thing, and a little slow sometimes to recognize what's going on. And what I really appreciate about your approach is you're really using this like a tool. It's a complicated process to get a really cool new paintbrush that people can create new things with.

And I want to make sure that we're not throwing out the babies with the bathwater as we're thinking about this. And I also think that, you know, my hope and my dream is that in our, in our better technological future, you know, these tools will be far more evenly distributed than say some of the earlier tools, right?
And you know, Second Life and things like that were fairly limited by who had the financial ability to actually have access. But we have broadened that aperture a lot, though not as far as it needs to go. And so, you know, part of my dream for a better tech future is that these tools are not locked away, with only people who have certain access and certain credentials getting the ability to use them.

But really, we broaden them out. That points towards more open models, open foundational models, as well as, um, a broader range of people being able to play with them, because I think that's where the cool stuff's probably gonna come from. That's where the cool stuff has always come from, right?

It hasn't come from the mainstream corporate business model for art. It's come from all the little nooks and crannies where the light comes in.

NETTRICE GASKINS
Yeah. Absolutely.

CINDY COHN
Oh Nettrice, thank you so much for sharing your vision and your enthusiasm with us. This has just been an amazing conversation.

NETTRICE GASKINS
Thanks for having me.

JASON KELLEY
What an incredible conversation to have, in part because, you know, we got to talk to an actual artist about their process and learn that, well, I learned that I know nothing about how to use generative AI and that some people are really, really talented and it comes from that kind of experience, and being able to really build something, and not just write a sentence and see what happens, but have an intention and a, a dedicated process to making art.

And I think it's going to be really helpful for more people to see the kind of art that Nettrice makes and hear some of that description of how she does it.

CINDY COHN
Yeah. I think so too. And I think the thing that just shines clear is that you can have all the tools, but you need the artist. And if you don't have the artist with their knowledge and their eye and their vision, then you're not really creating art with this. You may be creating something, something you could use, but you know, there's just no replacing the artist, even with the fanciest of tools.

JASON KELLEY
I keep coming back to the term that, uh, was applied to me often when I was younger, which was "script kiddie," because I never learned how to program, but I was very good at finding some code and using it. And I think that a lot of people think that's the only thing that generative AI lets you do.

And it's clear that if you have the talent and the, and the resources and the experience, you can do way more. And that's what Nettrice can show people. I hope more people come away from this conversation thinking like, I have to jump onto this now because I'm really excited to do exactly the kinds of things that she's doing.

CINDY COHN
Yeah, you know, she made a piece of generative art every day for a year, right? I mean, first of all, she comes from an art background, but then, you know, you've got to really dive in, and I think that cool things can come out of it.

The other thing I really liked was her recognition that so much of our, our culture and our society and the things that we love about our world comes from, you know, people on the margins making do and making art with what they have.

And I love the image of gumbo as a thing that comes out of cultures that don't have access to the finest cuts of meat and seafood and instead build something else, and she paired that with an image of Sashiko stitching in Japan, which came out of people trying to think about how to make their clothes last longer and make them stronger. And this gorgeous art form came out of it.

And how we can think of today's tools, whether they're AI or others, as another medium in which we can begin to make things of beauty, or things that are useful, out of, you know, maybe the dribs and drabs of something that was built for a corporate purpose.

JASON KELLEY
That's exactly right. And I also loved the comparison, which I think we've discussed before at EFF many times, of generative AI tools to hip hop and to other forms of remix art. A lot of people have probably made that connection, but it's worth saying again and again, because it is such a clear through line into those kinds of techniques and those kinds of art forms.

CINDY COHN
Yeah. And I think that, you know, from EFF's policy perspective, one of the reasons that we stand up for fair use and think that it's so important is the recognition that arts like collage, and like using generative AI, are not going to thrive if our model of how we control or monetize them is based on charging for every single little piece.

That's going to limit, just as it limited in hip hop, it's going to limit what kind of art we can get. And so that doesn't mean that we just shrug our shoulders and don't, you know, and say, forget it, artists, you're never going to be paid again.

JASON KELLEY
I guess we’re just never going to have hip hop or

CINDY COHN
Or the other side, which is: we need to find a way. There are lots of ways in which we compensate people for creation that aren't tied to individual control of individual artifacts. And in this age of AI, but in previous ages as well, the failure to look to those things and to embrace them has real impacts for our culture and society.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.

In this episode, you heard Xena's Kiss slash Madea's Kiss by MWIC and Lost Track by Airtone featuring MWIC. You can find links to their music in our episode notes or on our website at EFF.org slash podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

EFF Appeals Order Denying Public Access to Patent Filings

By: Aaron Mackey
June 3, 2024 at 13:36

It’s bad enough when patent holders enforcing their rights in court try to exclude the public from those fights. What’s even worse is when courts endorse these secrecy tactics, as a federal court hearing an EFF unsealing motion ruled in May. 

EFF continues to push for greater transparency in the case, Entropic Communications, LLC v. Charter Communications, Inc., and is asking a federal court of appeals to reverse the decision. A successful appeal would open this case to the public and help everyone better understand patent disputes filed in the U.S. District Court for the Eastern District of Texas.

Secrecy in patent litigation is an enduring problem, and EFF has repeatedly intervened in lawsuits involving patent claims to uphold the public’s right to access court records. And in this case, the secrecy issues are heightened by the parties and the court believing that they can jointly agree to keep entire records under seal, without ever having to justify the secrecy. 

This case is a dispute between a semiconductor products provider, Entropic, and one of the nation's largest media companies, Charter, which offers cable television and internet service to millions of people. Entropic alleged that Charter infringed its patents (U.S. Patent Nos. 8,223,775; 8,284,690; 8,792,008; 9,210,362; 9,825,826; and 10,135,682) which cover cable modem technology. 

Charter has argued it had a license defense to the patent claims based on the industry-leading cable data transmission standard, Data Over Cable Service Interface Specification (DOCSIS). The argument could raise a core legal question in patent law: when is a particular patent “essential” to a technical standard and thus encumbered by licensing commitments?  

But so many of the documents filed in court about this legal argument are heavily redacted, making it difficult to understand. EFF filed to intervene and unseal these documents in March. EFF’s motion in part targeted a practice that is occurring in many patent disputes in the Texas district court, whereby parties enter into agreements, known as protective orders. These agreements govern how parties will protect information they exchange during the fact-gathering portion of a case. 

Under the terms of the model protective order created by the court, the parties can file documents they agree are secret under seal without having to justify that such secrecy overrides the public’s right to access court records. 

Despite federal appellate courts repeatedly ruling that protective orders cannot short-circuit the public’s right of access, the district court ruled that the documents EFF sought to unseal could remain secret precisely because the parties had agreed. Additionally, the district court ruled that EFF had no right to seek to unseal the records because it filed the motion to intervene and make the records public four months after the parties had settled. 

EFF is disappointed by the decision and strongly disagrees. Notably, the opinion does not cite any legal authority that allows parties to stipulate to keep their public court fights secret. As noted above, many courts have ruled that such agreements are anathema to court transparency. 

Moreover, the court’s ruling that EFF could not even seek to unseal the documents in the first place sets a dangerous precedent. Under that rule, many court dockets, including those with significant historic and newsworthy materials, can become permanently sealed merely because no member of the public tried to intervene and unseal records while the case was open. 

That outcome turns the public’s right of access to court records on its head: it requires the public to be extremely vigilant about court secrecy and punishes them for not knowing about sealed records. Yet the entire point of the presumption of public access is that judges and litigants in the cases are supposed to protect the public’s right to open courts, as not every member of the public has the time and resources to closely monitor court proceedings and hire a lawyer to enforce their public rights should they be violated.

EFF looks forward to vindicating the public’s right to access records on appeal. 

The Alaska Supreme Court Takes Aerial Surveillance’s Threat to Privacy Seriously, Other Courts Should Too

By: Hannah Zhao
May 29, 2024 at 18:16

In March, the Alaska Supreme Court held in State v. McKelvey that the Alaska Constitution requires law enforcement to obtain a warrant before photographing a private backyard from an aircraft. In this case, the police took photographs of Mr. McKelvey’s property, including the constitutionally protected curtilage area, from a small aircraft using a zoom lens.

In arguing that Mr. McKelvey did not have a reasonable expectation of privacy, the government raised various factors which have been used to justify warrantless surveillance in other jurisdictions. These included the ubiquity of small aircraft flying overhead in Alaska; the commercial availability of the camera and lens; the availability of aerial footage of the land elsewhere; and the allegedly unobtrusive nature of the surveillance. 

In response, the Court divorced the ubiquity and availability of the technology from whether people would reasonably expect the government to use it to spy on them. The Court observed that the very fact that the government spent resources to take its own photos demonstrates that whatever images were already available were insufficient for law enforcement's needs. And the inability or unlikelihood of detecting the spying adds to, rather than detracts from, its pernicious nature, because “if the surveillance technique cannot be detected, then one can never fully protect against being surveilled.”

Throughout its analysis, the Alaska Supreme Court demonstrated a grounded understanding of modern technology—as well as its future—and its effect on privacy rights. At the outset, the Court pointed out that one might think that this warrantless aerial surveillance was not a significant threat to privacy rights because “aviation gas is expensive, officers are busy, and the likelihood of detecting criminal activity with indiscriminate surveillance flights is low.” However, the Court added pointedly, “the rise of drones has the potential to change that equation.” We made similar arguments and are glad to see that courts are taking the threat seriously.

This is a significant victory for Alaskans and their privacy rights, and it stands in contrast to a pair of U.S. Supreme Court cases from the 1980s, Ciraolo v. California and Florida v. Riley. In those cases, the justices found no violation of the federal constitution in aerial surveillance from low-flying manned aircraft. But there have been seismic changes in the capabilities of surveillance technology since those decisions, and courts should consider these developments rather than merely applying precedents uncritically.

With this decision, Alaska joins California, Hawaii, and Vermont in finding that warrantless aerial surveillance violates their state constitutions' prohibitions on unreasonable search and seizure. Other courts should follow suit to ensure that privacy rights do not fall victim to the advancement of technology.

Don't Let the Sun Go Down on Section 230 | EFFector 36.7

Curious about the latest digital rights news? Well, you're in luck! Our latest newsletter covers topics ranging from lawmakers' plans to sunset Section 230, the most important law for free expression online; to our brief regarding data sharing by electronic ankle monitoring devices; to the simple proposition that no one country should be restricting speech across the entire internet.

It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically. You can also listen to the audio version of the newsletter on the Internet Archive or on YouTube.


Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers (and listeners) up to date on the movement to protect online privacy and free expression.

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

A Wider View on TunnelVision and VPN Advice

If you listen to any podcast long enough, you will almost certainly hear an advertisement for a Virtual Private Network (VPN). These advertisements usually assert that a VPN is the only tool you need to stop cyber criminals, malware, government surveillance, and online tracking. But these advertisements vastly oversell the benefits of VPNs. The reality is that VPNs are mainly useful for one thing: routing your network connection through a different network. Many people, including EFF, have also thought of VPNs as a useful tool for encrypting your traffic when you don't trust the network you're on, such as at a coffee shop, university, or hacker conference. But new research from Leviathan Security is a reminder that this may not always be the case, and it highlights the limited use cases for VPNs.

TunnelVision is a recently published attack method that allows an attacker on a local network to force internet traffic to bypass your VPN and travel over an attacker-controlled channel instead. This lets the attacker see any unencrypted traffic (such as which websites you are visiting). Traditionally, corporations deploy VPNs so employees can access private company sites from other networks. Today, many people use a VPN in situations where they don't trust their local network. But the TunnelVision exploit makes clear that "untrusted network" is not always an appropriate threat model for VPNs: they will not always protect you if you can't trust your local network.

TunnelVision exploits the Dynamic Host Configuration Protocol (DHCP) to reroute traffic outside of a VPN connection. The VPN tunnel itself stays up and is never broken, but the attacker is able to view any traffic that isn't otherwise encrypted. Think of DHCP as giving you a nametag when you enter the room at a networking event. The host knows at least 50 guests will be in attendance and has allocated 50 blank nametags; some may be reserved for VIP guests, but the rest can be allocated to anyone who properly RSVP'd. When you arrive, the host checks your name and assigns you a nametag, and you may now enter the room identified as "Agent Smith." In the case of computers, this "nametag" is the IP address that DHCP assigns to each device on the network. The assignment is normally handled automatically by a DHCP server, though it could also be done by hand, like a host pinning nametags on guests one at a time.
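
To see the "nametag" exchange in practice, here is a minimal Python sketch of a DHCP Discover/Offer round trip. It assumes the third-party scapy library (not mentioned in the article), needs root privileges and a network with a DHCP server, and is an illustration rather than a hardened client:

```python
from scapy.all import Ether, IP, UDP, BOOTP, DHCP, srp1, get_if_hwaddr, conf

def dhcp_discover(iface=None):
    """Broadcast a DHCP Discover ("may I have a nametag?") and return
    the IP address offered back by the DHCP server, if any."""
    iface = iface or conf.iface
    mac = get_if_hwaddr(iface)
    discover = (
        Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")
        / IP(src="0.0.0.0", dst="255.255.255.255")
        / UDP(sport=68, dport=67)
        / BOOTP(chaddr=bytes.fromhex(mac.replace(":", "")))
        / DHCP(options=[("message-type", "discover"), "end"])
    )
    # srp1 sends at layer 2 and returns the first answering packet.
    offer = srp1(discover, iface=iface, timeout=5, verbose=False)
    if offer and offer.haslayer(BOOTP):
        return offer[BOOTP].yiaddr  # the "nametag" being offered to us
    return None

if __name__ == "__main__":
    print(dhcp_discover())
```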

TunnelVision abuses a DHCP configuration option called Option 121 (classless static routes, defined in RFC 3442), which lets a DHCP server—including a rogue one run by an attacker on the local network—push routing rules onto a targeted device. Those routes steer the device's traffic toward the attacker's gateway instead of through the VPN's virtual interface. Past attacks, such as TunnelCrack, used similar methods, and chances are that a VPN provider that addressed TunnelCrack is also working to verify mitigations for TunnelVision.
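
To make Option 121 concrete, here is a minimal Python sketch (standard library only) of how a list of routes is packed into the option's payload under RFC 3442. The gateway address is a made-up documentation address, and the rogue DHCP server that would actually deliver this payload is omitted:

```python
import ipaddress

def encode_option_121(routes):
    """Pack (destination_cidr, gateway) pairs into the payload of DHCP
    option 121 (classless static routes, RFC 3442)."""
    payload = bytearray()
    for cidr, gateway in routes:
        net = ipaddress.ip_network(cidr)
        payload.append(net.prefixlen)
        # Only the significant octets of the destination are sent.
        significant = (net.prefixlen + 7) // 8
        payload += net.network_address.packed[:significant]
        payload += ipaddress.ip_address(gateway).packed
    return bytes(payload)

# Two /1 routes together cover all of IPv4 and are more specific than
# a /0 default route, so they take precedence in the routing table.
attacker_gateway = "192.0.2.1"  # illustrative TEST-NET-1 address
payload = encode_option_121([
    ("0.0.0.0/1", attacker_gateway),
    ("128.0.0.0/1", attacker_gateway),
])
print(payload.hex())  # 0100c00002010180c0000201
```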

In the words of the security researchers who published this attack method:

“There’s a big difference between protecting your data in transit and protecting against all LAN attacks. VPNs were not designed to mitigate LAN attacks on the physical network and to promise otherwise is dangerous.”

Rather than lament the many ways public, untrusted networks can leave someone vulnerable, it's worth noting the protections that now assist by default. The internet was not originally built with security in mind, and many people have been working hard to rectify that. Today we have many other tools in our toolbox to deal with these problems. For example, web traffic is mostly encrypted with HTTPS. This does not change your IP address the way a VPN can, but it does encrypt the contents of the web pages you visit and secure your connection to a website. DNS lookups, which happen before the HTTPS connection is made, have also been a vector for surveillance and abuse, since the requested domain of the website is still exposed at that level. There have been wide-ranging efforts to secure and encrypt DNS as well: encrypted DNS and HTTPS by default are now available in every major browser, closing off possible attack vectors for snoops on the same network as you. Lastly, major browsers have implemented support for Encrypted Client Hello (ECH), which encrypts your initial connection to a website, sealing off metadata that was previously left in cleartext.
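
As one illustration of encrypted DNS, the sketch below performs a DNS-over-HTTPS lookup in Python, so the query itself travels inside TLS instead of in cleartext. It assumes the third-party requests library and uses Cloudflare's public DoH JSON endpoint as an example; any DoH-capable resolver would do:

```python
import requests  # third-party library, assumed installed

def doh_lookup(name, record_type="A"):
    """Resolve a hostname over DNS-over-HTTPS via Cloudflare's public
    JSON API, hiding the query from snoops on the local network."""
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"Accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))  # prints the resolved addresses
```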

TunnelVision is a reminder that we need to be clear about what our tools can and cannot do. A VPN does not provide anonymity online, and neither does encrypted DNS or HTTPS (Tor, however, can). These are all separate tools that address related but distinct problems. Thankfully, HTTPS, encrypted DNS, and encrypted messengers are completely free, require no subscription, and provide basic protections on an untrusted network. VPNs, at least from providers who have worked to mitigate TunnelVision, remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
