
2024 Year in Review

By: Cindy Cohn
December 23, 2024, 10:50 AM

It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest that 2024 was a big year. 

If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise, practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they need a warrant.  

For example, much of the public discourse this year was taken up by generative AI. The issue seemed to be a Rorschach test for everyone’s anxieties about technology, be they about privacy, the replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is being used against our rights, EFF will oppose that use. It’s a future-proof way of protecting us. If we have privacy protections, labor protections, and protections against government invasions, then it does not matter what technology takes over the public imagination; we will have recourse against its harms.

But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council meetings online.  

With your help, we have let the United States Congress know that its constituents support protecting the free press and oppose laws that would cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new technologies, newly proposed laws, and legal precedent.

Even all of that does not capture everything we did this year. And we could not—indeed, cannot—do it without you. Your support keeps the lights on and ensures we are speaking not just for EFF as an organization but for our thousands of tireless members. Thank you, as always.

We will update this page with new stories about digital rights in 2024 every day between now and the new year. 

Defending Encryption in the U.S. and Abroad
EFF in the Press
The U.S. Supreme Court Continues its Foray into Free Speech and Tech
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It
We Stood Up for Access to the Law and Congress Listened
Police Surveillance in San Francisco
Fighting For Progress On Patents
Celebrating Digital Freedom with EFF Supporters
Surveillance Self-Defense
EU Tech Regulation—Good Intentions, Unclear Consequences

Amazon and Google Must Keep Their Promises on Project Nimbus

December 2, 2024, 2:52 PM

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.   

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.  

Instead, they failed to respond. This is unfortunate, since it leads us to question how serious their promises were. And it should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making, as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families.

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation.

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme. 

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the companies’ commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these questions publicly, and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.   

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.

The 2024 U.S. Election is Over. EFF is Ready for What's Next.

By: Cindy Cohn
November 6, 2024, 11:56 AM

The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high. 

No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running. 

During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported those protesting in the streets with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information on how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues stay safe.

Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, it did so—recognizing that location information collected by cell providers creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data.

EFF is at its best when the stakes are high. 

But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to backdoor private messaging with “ghost” and “client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common-sense law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back on measures that would have gone beyond what was necessary to keep people safe and healthy, invading our privacy and inhibiting our free speech.

Every time policymakers or private companies tried to undermine your rights online during the last Trump administration, from 2017 to 2021, we were there—just as we continued to be under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid:

  • Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impact the privacy of those who live, work, or seek refuge there, and of the thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.
  • Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
  • Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress. 
  • Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions. 

We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user. 

We’re not merely fighting against individual companies or elected officials or even specific administrations.  We are fighting for you. That won’t stop no matter who’s in office. 

DONATE TODAY

Election Security: When to Worry, When to Not

This post was written by EFF intern Nazli Ungan as an update to a 2020 Deeplinks post by Cindy Cohn.

Everyone wants an election that is secure and reliable and that will ensure that voters’ actual choices are reflected in the results. That’s as true heading into the 2024 U.S. general election as it has ever been.

At the same time, not every problem in voting technology or systems is worth pulling the fire alarm—we have to look at the bigger story and context. And we have to stand down when our worst fears turn out to be unfounded.

Resilience is the key word when it comes to the security and the integrity of our elections. We need our election systems to be technically and procedurally resilient against potential attacks or errors. But equally important, we need the voting public to be resilient against false or unfounded claims of attack or error. Luckily, our past experiences and the work of election security experts have taught us a few lessons on when to worry and when to not.

See EFF's handout on Election Security here: https://www.eff.org/document/election-security-recommendations

We Need Risk-Limiting Audits

First, and most importantly, it is critical to have systems in place to support election technology and the election officials who run it. Machines may fail, and humans may make errors. We cannot simply assume that there will be no issues in voting and tabulation. Instead, there must be built-in safety measures that will catch any issues that could affect the official election results.

It is critical to have systems in place to support election technology and the election officials who run it.

The most important of these is performing routine, post-election risk-limiting audits (RLAs) after every election. RLAs should occur even if there is no apparent reason to suspect the accuracy of the results. Risk-limiting audits are considered the gold standard of post-election audits, and they give the public justified confidence in the results. This type of audit entails manually checking randomly selected ballots until there is convincing evidence that the election outcome is correct. In many cases, it can be performed by counting only a small fraction of ballots cast, making it cheap enough to be performed in every election. When the margins are tighter, a greater fraction of the votes must be hand counted, but this is a good thing: we want to scrutinize close contests more strictly to make sure the right person won the race. Some states have started requiring risk-limiting audits, and the rest should catch up! A simplified sketch of how such an audit works appears below.
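
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of a BRAVO-style ballot-polling audit for a two-candidate contest. It only illustrates the statistical idea described above; the function name, the numbers, and the simplifications (for example, drawing ballots without replacement, whereas published BRAVO samples with replacement) are ours, not any jurisdiction’s actual procedure.

```python
import random

def bravo_audit(ballots, reported_winner_share, risk_limit=0.05, seed=1):
    """Toy BRAVO-style ballot-polling audit for a two-candidate race."""
    rng = random.Random(seed)
    order = rng.sample(range(len(ballots)), len(ballots))  # random inspection order
    t = 1.0  # Wald sequential test statistic vs. the "race was actually tied" hypothesis
    for n, i in enumerate(order, start=1):
        if ballots[i] == 'W':                         # ballot for the reported winner
            t *= reported_winner_share / 0.5
        else:                                         # ballot for the reported loser
            t *= (1 - reported_winner_share) / 0.5
        if t >= 1 / risk_limit:    # convincing evidence the reported outcome is correct
            return n               # number of ballots that had to be hand-checked
    return None                    # sample exhausted: escalate to a full hand count

# 10,000 ballots with a reported 60/40 result for the winner 'W'.
votes = ['W'] * 6000 + ['L'] * 4000
print(bravo_audit(votes, reported_winner_share=0.6))
```

With a comfortable 60/40 margin, the sketch typically stops after only a couple hundred ballots; rerun it with a 51/49 split to watch the required sample balloon, which is exactly the close-contest scrutiny described above.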

We (and many others in the election integrity community) also continue to push for more transparency in election systems and more independent testing, including red-team-style attacks and end-to-end pre-election testing.

And We Need A Paper Trail

Second, voting on paper ballots continues to be extremely important and the most secure strategy. Ideally, all voters should use paper ballots marked by hand, or with an assistive device, and verify their votes before casting. If there is no paper record, there is no way to perform a post-election audit, or recount votes in the event of an error or a security incident. On the other hand, if voters vote on paper, they can verify their choices are recorded accurately. More importantly, election officials can hand count a portion of the paper ballots to make sure they match with the electronic vote totals and confirm the accuracy of the election results. 

What happened in Antrim County, Michigan in the 2020 general elections illustrates the importance of paper ballots. Immediately after the 2020 elections, Antrim County published inaccurate unofficial results, and then restated these results three times to correct the errors, which led to conspiracy theories about the voting systems used there. Fortunately, Antrim County voters had voted on paper ballots, so Michigan was able to confirm the final presidential results by conducting a county-wide hand count and affirm them by a state-wide risk-limiting audit pilot. This would not have been possible without paper ballots.  

And we can’t stop there, because not every paper record is created equal. Some direct-recording electronic systems are equipped with a type of Voter-Verified Paper Audit Trail that makes it difficult for voters to verify their selections and for election officials to use the records in audits and recounts. The best practice is to have all votes cast on pre-printed paper ballots, marked by hand or with an assistive ballot-marking device.

Third, it is important to have the entire voting technical system under the control of election officials so that they can investigate any potential problems, which is one of the reasons why internet voting remains a bad, bad idea. There are “significant security, privacy, and ballot secrecy challenges” associated with electronic ballot return systems, which make it possible for a single attacker to alter thousands or even millions of votes. Maybe in the future we will have tools to limit the risks of internet voting. But until then, we should reject any proposal that includes electronic ballot return over the internet. Speaking of the internet, voting machines should never connect to the internet, dial a modem, or communicate wirelessly.

Internet voting remains a bad, bad idea

Fourth, every part of the voting process that relies on technology must have paper backups so that voting can continue even when the machines fail. This includes paper backups for electronic pollbooks, emergency paper ballots in case voting machines fail, and provisional ballots in case a voter’s eligibility cannot be confirmed.

Stay Vigilant and Informed

Fifth, we should continue to be vigilant. Election officials have come a long way from when we started raising concerns about electronic voting machines and systems. But the public should keep watching and, when warranted, not be afraid to raise or flag things that seem strange. For example, if you see something like voting machines “flipping” the votes, you should tell the poll workers. This doesn’t necessarily mean there has been a security breach; it can be as simple as a calibration error, but it can mean lost votes. Poll workers can and should address the issue immediately by providing voters with emergency paper ballots. 

Sixth, not everything that seems out of the ordinary is reason to worry. We should build societal resistance to disinformation. CISA's Election Security Rumor vs. Reality website is a good resource that addresses election security rumors and educates us on when we do and don’t need to be alarmed. State-specific information is also available online. If we see or hear anything odd about what is happening at a particular locality, we should first hear what the election officials on the ground have to say about it. After all, they were there! We should also pay attention to what nonpartisan election protection organizations, such as Verified Voting, say about the incident.

The 2024 presidential election is fast approaching and there may be many claims of computer glitches and other forms of manipulation concerning our voting systems in November. Knowing when to worry and when NOT to worry will continue to be extremely important.  

In the meantime, the work of securing our elections and building resilience must continue. While not every glitch is worrisome, we should not dismiss legitimate security concerns. As often said: election security is a race without a finish line!

Salt Typhoon Hack Shows There's No Security Backdoor That's Only For The "Good Guys"

At EFF we’ve long noted that you cannot build a backdoor that only lets in good guys and not bad guys. Over the weekend, we saw another example of this: The Wall Street Journal reported on a major breach of U.S. telecom systems attributed to a sophisticated Chinese-government backed hacking group dubbed Salt Typhoon.

According to reports, the hack took advantage of systems built by ISPs like Verizon, AT&T, and Lumen Technologies (formerly CenturyLink) to give law enforcement and intelligence agencies access to the ISPs’ user data. This gave China unprecedented access to data related to U.S. government requests to these major telecommunications companies. It’s still unclear how much communications and internet traffic Salt Typhoon accessed, or whose.

That’s right: the path for law enforcement access set up by these companies was apparently compromised and used by China-backed hackers. That path was likely created to facilitate smooth compliance with wrong-headed laws like CALEA, which require telecommunications companies to facilitate “lawful intercepts”—in other words, wiretaps and other orders by law enforcement and national security agencies. While this is a terrible outcome for user privacy, and for U.S. government intelligence and law enforcement, it is not surprising. 

The idea that only authorized government agencies would ever use these channels for acquiring user data was always risky and flawed. We’ve seen this before: in a notorious case in 2004 and 2005, more than 100 top officials in the Greek government were illegally surveilled for a period of ten months when unknown parties broke into Greece’s “lawful access” program. In 2024, with growing numbers of sophisticated state-sponsored hacking groups operating, it’s almost inevitable that these types of damaging breaches occur. The system of special law enforcement access that was set up for the “good guys” isn’t making us safer; it’s a dangerous security flaw. 

Internet Wiretaps Have Always Been A Bad Idea

Passed in 1994, CALEA requires that makers of telecommunications equipment provide the ability for government eavesdropping. In 2004, the government dramatically expanded this wiretap mandate to include internet access providers. EFF opposed this expansion and explained the perils of wiretapping the internet.  

The internet is different from the phone system in critical ways, making it more vulnerable. The internet is open and ever-changing.  “Many of the technologies currently used to create wiretap-friendly computer networks make the people on those networks more pregnable to attackers who want to steal their data or personal information,” EFF wrote, nearly 20 years ago.

Towards Transparency And Security

The irony should be lost on no one that the Chinese government may now possess more knowledge about who the U.S. government spies on, including people living in the U.S., than Americans themselves do. The intelligence and law enforcement agencies that use these backdoor legal authorities are notoriously secretive, making oversight difficult.

Companies and people who are building communication tools should be aware of these flaws and implement, where possible, privacy by default. As bad as this hack was, it could have been much worse were it not for the hard work of EFF and other privacy advocates in making sure that more than 90% of web traffic is encrypted via HTTPS. For those hosting the 10% (or so) of the web that has yet to encrypt its traffic, now is a great time to consider turning on encryption, either by using Certbot or by switching to a hosting provider that offers HTTPS by default.
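
If you host a site and want a quick way to confirm it already serves valid HTTPS, here is a minimal, hypothetical Python sketch (not an EFF tool; the hostname is just an example) that opens a TLS connection and verifies the certificate chain using the standard library:

```python
import socket
import ssl

def check_https(hostname: str, port: int = 443) -> None:
    """Connect over TLS and report the negotiated protocol version."""
    context = ssl.create_default_context()  # verifies certificates by default
    with socket.create_connection((hostname, port), timeout=10) as sock:
        # Raises ssl.SSLCertVerificationError if the certificate doesn't check out.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(f"{hostname}: certificate OK, negotiated {tls.version()}")

check_https("www.eff.org")  # substitute your own domain
```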

What can we do next? We must demand real privacy and security.  

That means we must reject the loud law enforcement and other voices that continue to pretend that there are “good guy only” ways to ensure access. We can point to this example, among many others, to push back on the idea that the default in the digital world is that governments (and malicious hackers) should be able to access all of our messages and files. We’ll continue to fight against US bills like EARN IT, the EU “Chat Control” file-scanning proposal, and the UK’s Online Safety Act, all of which are based on this flawed premise. 

It’s time for U.S. policymakers to step up too. If they care about China and other foreign countries engaging in espionage on U.S. citizens, it’s time to speak up in favor of encryption by default. If they don’t want to see bad actors take advantage of their constituents, domestic companies, or security agencies, again—speak up for encryption by default. Elected officials can and have done so in the past. Instead of holding hearings that give the FBI a platform to make digital wiretaps easier, they should demand accountability for the digital lock-breaking it is already doing.

The lesson will be repeated until it is learned: there is no backdoor that only lets in good guys and keeps out bad guys. It’s time for all of us to recognize this, and take steps to ensure real security and privacy for all of us.

Human Rights Claims Against Cisco Can Move Forward (Again)

By: Cindy Cohn
September 18, 2024, 6:04 PM

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep-dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, a process called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again.

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon  are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   

UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

This is the third post in a series highlighting flaws in the proposed UN Cybercrime Convention. Check out Part I, our detailed analysis on the criminalization of security research activities, and Part II, an analysis of the human rights safeguards.

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Background and Current Status of the UN Cybercrime Convention Negotiations

The UN Ad Hoc Committee overseeing the talks and preparation of a final text is expected to consider a revised but still-flawed text in its entirety, along with the interpretative notes, during the first week of the session, with a focus on all provisions not yet agreed ad referendum.[1] However, in keeping with the principle in multilateral negotiations that “nothing is agreed until everything is agreed,” any provisions of the draft that have already been agreed could potentially be reopened. 

The current text reveals significant disagreements among countries on crucial issues like the convention's scope and human rights protection. Of course the text could also get worse. Just when we thought Member States had removed many concerning crimes, they could reappear. The Ad-Hoc Committee Chair’s General Assembly resolution includes two additional sessions to negotiate not more protections, but the inclusion of more crimes. The resolution calls for “a draft protocol supplementary to the Convention, addressing, inter alia, additional criminal offenses.” Nevertheless, some countries still expect the latest draft to be adopted.

In this third post, we highlight the dangers of the proposed UN Cybercrime Convention’s broad definition of "electronic data" and its inadequate privacy and data protection safeguards. Together, these create the conditions for severe human rights abuses, transnational repression, and inconsistencies across countries in human rights protections.

A Closer Look at the Definition of Electronic Data

The proposed UN Cybercrime Convention significantly expands state surveillance powers under the guise of combating cybercrime. Chapter IV grants extensive government authority to monitor and access digital systems and data, categorizing communications data into subscriber data, traffic data, and content data. But it also makes use of a catch-all category called "electronic data." Article 2(b) defines electronic data as "any representation of facts, information, or concepts in a form suitable for processing in an information and communications technology system, including a program suitable to cause an information and communications technology system to perform a function."

"Electronic data," is eligible for three surveillance powers: preservation orders (Article 25), production orders (Article 27), and search and seizure (Article 28). Unlike the other traditional categories of traffic data, subscriber data and content data, "electronic data" refers to any data stored, processed, or transmitted electronically, regardless of whether it has been communicated to anyone. This includes documents saved on personal computers or notes stored on digital devices. In essence, this means that private unshared thoughts and information are no longer safe. Authorities can compel the preservation, production, or seizure of any electronic data, potentially turning personal devices into spy vectors regardless of whether the information has been communicated.

This is delicate territory, and it deserves careful thought and real protection—many of us now use our devices to keep our most intimate thoughts and ideas, and many of us also use health and fitness tools in ways that we do not intend to share. This includes data stored on devices, such as face scans and smart home device data, if they remain within the device and are not transmitted. Another example could be photos that someone takes on a device but doesn’t share with anyone. This category threatens to turn our most private thoughts and actions over to spying governments, both our own and others.

And the problem is worse when we consider emerging technologies. The sensors in smart devices, AI systems, and augmented reality glasses can collect a wide array of highly sensitive data. These sensors can record involuntary physiological reactions to stimuli, including eye movements, facial expressions, and heart rate variations. For example, eye-tracking technology can reveal what captures a user's attention and for how long, which can be used to infer interests, intentions, and even emotional states. Similarly, voice analysis can provide insights into a person's mood based on tone and pitch, while body-worn sensors might detect subtle physical responses that users themselves are unaware of, such as changes in heart rate or perspiration levels.

These types of data are not typically communicated through traditional communication channels like emails or phone calls (which would be categorized as content or traffic data). Instead, they are collected, stored, and processed locally on the device or within the system, fitting the broad definition of "electronic data" as outlined in the draft convention.

Such data has likely been harder to obtain because it may never have been communicated to or possessed by any communications intermediary or system. So it’s an example of how the broad term "electronic data" increases the kinds (and sensitivity) of information about us that can be targeted by law enforcement through production orders or search and seizure powers. These emerging technology uses are their own category, but they are most like "content" in communications surveillance, which usually has high protection. “Electronic data” must have protection equal to that of the “content” of communications, and be subject to ironclad data protection safeguards, which the proposed treaty fails to provide, as we explain below.

The Specific Safeguard Problems

Like other powers in the draft convention, the broad powers related to "electronic data" don't come with specific limits to protect fair trial rights. 

Missing Safeguards

For example, many countries have various kinds of information that are protected by a legal “privilege” against surveillance: attorney-client privilege, spousal privilege, priest-penitent privilege, doctor-patient privilege, and many kinds of protections for confidential business information and trade secrets. Many countries also give additional protections to journalists and their sources. These categories, and more, carry varying degrees of extra requirements before law enforcement may access them using production orders or search-and-seizure powers, as well as various protections after the fact, such as preventing their use in prosecutions or civil actions.

Similarly, the convention lacks clear safeguards to prevent authorities from compelling individuals to provide evidence against themselves. These omissions raise significant red flags about the potential for abuse and the erosion of fundamental rights when a treaty text involves so many countries with a high disparity of human rights protections.

The lack of specific protections for criminal defense is especially troubling. In many legal systems, defense teams have certain protections to ensure they can effectively represent their clients, including access to exculpatory evidence and the protection of defense strategies from surveillance. However, the draft convention does not explicitly protect these rights, which both misses the chance to require all countries to provide these minimal protections and potentially further undermines the fairness of criminal proceedings and the ability of suspects to mount an effective defense in countries that either don’t provide those protections or where they are not solid and clear.

Even the State “Safeguards” in Article 24 are Grossly Insufficient

Even where the convention’s text discusses “safeguards,” the convention doesn’t actually protect people. The “safeguard” section, Article 24, fails in several obvious ways: 

Dependence on Domestic Law: Article 24(1) makes safeguards contingent on domestic law, which can vary significantly between countries. This can result in inadequate protections in states where domestic laws do not meet high human rights standards. By deferring safeguards to national law, Article 24 weakens these protections, as national laws may not always provide the necessary safeguards. It also means that the treaty doesn’t raise the bar against invasive surveillance, but rather confirms even the lowest protections.

A safeguard that bends to domestic law isn't a safeguard at all if it leaves the door open for abuses and inconsistencies, undermining the protection it's supposed to offer.

Discretionary Safeguards: Article 24(2) uses vague terms like “as appropriate,” allowing states to interpret and apply safeguards selectively. This means that while the surveillance powers in the convention are mandatory, the safeguards are left to each state’s discretion. Countries decide what is “appropriate” for each surveillance power, leading to inconsistent protections and potential weakening of overall safeguards.

Lack of Mandatory Requirements: Essential protections such as prior judicial authorization, transparency, user notification, and the principle of legality, necessity and non-discrimination are not explicitly mandated. Without these mandatory requirements, there is a higher risk of misuse and abuse of surveillance powers.

No Specific Data Protection Principles: As we noted above, the proposed treaty does not include specific safeguards for highly sensitive data, such as biometric or privileged data. This oversight leaves such information vulnerable to misuse.

Inconsistent Application: The discretionary nature of the safeguards can lead to their inconsistent application, exposing vulnerable populations to potential rights violations. Countries might decide that certain safeguards are unnecessary for specific surveillance methods, which the treaty allows, increasing the risk of abuse.

Finally, Article 23(4) of Chapter IV authorizes the application of Article 24 safeguards to specific powers within the international cooperation chapter (Chapter V). However, significant powers in Chapter V, such as those related to law enforcement cooperation (Article 47) and the 24/7 network (Article 41), do not specifically cite the corresponding Chapter IV powers and so may not be covered by Article 24 safeguards.

Search and Seizure of Stored Electronic Data

The proposed UN Cybercrime Convention significantly expands government surveillance powers, particularly through Article 28, which deals with the search and seizure of electronic data. This provision grants authorities sweeping abilities to search and seize data stored on any computer system, including personal devices, without clear, mandatory privacy and data protection safeguards. This poses a serious threat to privacy and data protection.

Article 28(1) allows authorities to search and seize any “electronic data” in an information and communications technology (ICT) system or data storage medium. It lacks specific restrictions, leaving much to the discretion of national laws. This could lead to significant privacy violations as authorities might access all files and data on a suspect’s personal computer, mobile device, or cloud storage account—all without clear limits on what may be targeted or under what conditions.

Article 28(2) permits authorities to search additional systems if they believe the sought data is accessible from the initially searched system. While judicial authorization should be a requirement to assess the necessity and proportionality of such searches, Article 24 only mandates “appropriate conditions and safeguards” without explicit judicial authorization. In contrast, U.S. law under the Fourth Amendment requires search warrants to specify the place to be searched and the items to be seized—preventing unreasonable searches and seizures.

Article 28(3) empowers authorities to seize or secure electronic data, including making and retaining copies, maintaining its integrity, and rendering it inaccessible or removing it from the system. For publicly accessible data, this takedown process could infringe on free expression rights and should be explicitly subject to free expression standards to prevent abuse.

Article 28(4) requires countries to have laws that allow authorities to compel anyone who knows how a particular computer or device works to provide the information necessary to access it. This could include asking a tech expert or an engineer to help unlock a device or explain its security features. This is concerning because it might force people to help law enforcement in ways that could compromise security or reveal confidential information. As written, it could be interpreted to allow disproportionate orders: an engineer could be required to disclose to the government a security flaw that hasn’t been fixed, or to hand over encryption keys that protect data, including signing keys, on the basis that these are “the necessary information to enable” some form of surveillance. Any of these disclosures could then be misused.

Privacy International and EFF strongly recommend that Article 28(4) be removed in its entirety. Instead, it has been agreed ad referendum. At a minimum, the drafters must include material in the explanatory memorandum that accompanies the draft convention to clarify limits, so that technologists are not forced to reveal confidential information or do work on behalf of law enforcement against their will. Once again, it would also be appropriate to have clear legal standards about how law enforcement can be authorized to seize and look through people’s private devices.

In general, production and search and seizure orders might be used to target tech companies' secrets, and require uncompensated labor by technologists and tech companies, not because they are evidence of crime but because they can be used to enhance law enforcement's technical capabilities.

Domestic Expedited Preservation Orders of Electronic Data

Article 25 on preservation orders, already agreed ad referendum, is especially problematic. It’s very broad, and will result in individuals’ data being preserved and available for use in prosecutions far more than needed. It also fails to include necessary safeguards to avoid abuse of power. By allowing law enforcement to demand preservation with no factual justification, it risks spreading familiar deficiencies in U.S. law worldwide.

Article 25 requires each country to create laws or other measures that let authorities quickly preserve specific electronic data, particularly when there are grounds to believe that such data is at risk of being lost or altered.

Article 25(2) ensures that when preservation orders are issued, the person or entity in possession of the data must keep it for up to 90 days, giving authorities enough time to obtain the data through legal channels, while allowing this period to be renewed. There is no specified limit on the number of times the order can be renewed, so it can potentially be reimposed indefinitely.

Preservation orders should be issued only when they are absolutely necessary, but Article 24 does not mention the principle of necessity, and the convention lacks individual notice requirements, explicit grounds requirements, and statistical transparency obligations.

The article must limit the number of times preservation orders may be renewed to prevent indefinite data preservation requirements. Each preservation order renewal must require a demonstration of continued necessity and factual grounds justifying continued preservation.

Article 25(3) also compels states to adopt laws that enable gag orders to accompany preservation orders, prohibiting service providers or individuals from informing users that their data was subject to such an order. The duration of such a gag order is left up to domestic legislation.

As with all other gag orders, the confidentiality obligation should be subject to time limits and only be available to the extent that disclosure would demonstrably threaten an investigation or other vital interest. Further, individuals whose data was preserved should be notified when it is safe to do so without jeopardizing an investigation. Independent oversight bodies must oversee the application of preservation orders.

Indeed, academics such as prominent law professor and former U.S. Department of Justice lawyer Orin S. Kerr have criticized similar U.S. data preservation practices under 18 U.S.C. § 2703(f) for allowing law enforcement agencies to compel internet service providers to retain all contents of an individual's online account without their knowledge, any preliminary suspicion, or judicial oversight. This approach, intended as a temporary measure to secure data until further legal authorization is obtained, lacks the foundational legal scrutiny typically required for searches and seizures under the Fourth Amendment, such as probable cause or reasonable suspicion.

The lack of explicit mandatory safeguards raises similar concerns about Article 25 of the proposed UN convention. Kerr argues that these U.S. practices constitute a "seizure" under the Fourth Amendment, indicating that such actions should be justified by probable cause or, at the very least, reasonable suspicion—criteria conspicuously absent from the current draft of the UN convention.

By drawing on Kerr's analysis, we see a clear warning: without robust safeguards— including an explicit grounds requirement, prior judicial authorization, explicit notification to users, and transparency—preservation orders of electronic data proposed under the draft UN Cybercrime Convention risk replicating the problematic practices of the U.S. on a global scale.

Production Orders of Electronic Data

Article 27(a)’s treatment of “electronic data” in production orders, in light of the draft convention’s broad definition of the term, is especially problematic.

This article, which has already been agreed ad referendum, allows production orders to be issued to custodians of electronic data, requiring them to turn over copies of that data. While demanding customer records from a company is a traditional governmental power, this power is dramatically increased in the draft convention.

As we explain above, the extremely broad definition of electronic data, which is often sensitive in nature, raises new and significant privacy and data protection concerns, as it permits authorities to access potentially sensitive information without immediate oversight and prior judicial authorization. The convention needs instead to require prior judicial authorization before such information can be demanded from the companies that hold it. 

This ensures that an impartial authority assesses the necessity and proportionality of the data request before it is executed. Without mandatory data protection safeguards for the processing of personal data, law enforcement agencies might collect and use personal data without adequate restrictions, thereby risking the exposure and misuse of personal information.

The text of the convention fails to include these essential data protection safeguards. To protect human rights, data should be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner that is incompatible with those purposes. 

Data collected should be adequate, relevant, and limited to what is necessary to the purposes for which they are processed. Authorities should request only the data that is essential for the investigation. Production orders should clearly state the purpose for which the data is being requested. Data should be kept in a format that permits identification of data subjects for no longer than is necessary for the purposes for which the data is processed. None of these principles are present in Article 27(a) and they must be. 

International Cooperation and Electronic Data

The draft UN Cybercrime Convention includes significant provisions for international cooperation, extending the reach of domestic surveillance powers across borders, by one state on behalf of another state. Such powers, if not properly safeguarded, pose substantial risks to privacy and data protection. 

  • Article 42 (1) (“International cooperation for the purpose of expedited preservation of stored electronic data”) allows one state to ask another to obtain preservation of “electronic data” under the domestic power outlined in Article 25. 
  • Article 44 (1) (“Mutual legal assistance in accessing stored electronic data”) allows one state to ask another “to search or similarly access, seize or similarly secure, and disclose electronic data,” presumably using powers similar to those under Article 28, although that article is not referenced in Article 44. This specific provision, which has not yet been agreed ad referendum, enables comprehensive international cooperation in accessing stored electronic data. For instance, if Country A needs to access emails stored in Country B for an ongoing investigation, it can request Country B to search and provide the necessary data.

Countries Must Protect Human Rights or Reject the Draft Treaty

The current draft of the UN Cybercrime Convention is fundamentally flawed. It dangerously expands surveillance powers without robust checks and balances, undermines human rights, and poses significant risks to marginalized communities. The broad and vague definitions of "electronic data," coupled with weak privacy and data protection safeguards, exacerbate these concerns.

Traditional domestic surveillance powers are particularly concerning because they underpin international surveillance cooperation. This means that one country can easily comply with the requests of another, which, if not adequately safeguarded, can lead to widespread government overreach and human rights abuses.

Without stringent data protection principles and robust privacy safeguards, these powers can be misused, threatening human rights defenders, immigrants, refugees, and journalists. We urgently call on all countries committed to the rule of law, social justice, and human rights to unite against this dangerous draft. Whether large or small, developed or developing, every nation has a stake in ensuring that privacy and data protection are not sacrificed. 

Significant amendments must be made to ensure these surveillance powers are exercised responsibly and protect privacy and data protection rights. If these essential changes are not made, countries must reject the proposed convention to prevent it from becoming a tool for human rights violations or transnational repression.

[1] In the context of treaty negotiations, "ad referendum" means that an agreement has been reached by the negotiators, but it is subject to the final approval or ratification by their respective authorities or governments. It signifies that the negotiators have agreed on the text, but the agreement is not yet legally binding until it has been formally accepted by all parties involved.

34 Years Supporting the Wild and Weird World Online

By: Cindy Cohn
July 10, 2024, 3:34 AM

Oh the stories I could tell you about EFF's adventures anchoring the digital rights movement. Clandestine whistleblowers. Secret rooms. Encryption cracking. Airships over mass spying facilities. Even appearances from a badger, a purple dinosaur, and an adorable toddler dancing to Prince. EFF emerged as a proud friend to creators and users alike in this wild and weird world online—and we’re still at it.

Today the Electronic Frontier Foundation commemorates its 34th anniversary of battling for your digital freedom. It’s important to glean wisdom from where we have been, but at EFF we're also strong believers that this storied past helps us build a positive future. Central to our work is supporting the unbounded creativity on the internet and the people who are, even today, imagining what a better world looks like.

That’s why EFF’s lawyers, activists, policy analysts, and technologists have been on your side since 1990. I’ve seen magical things happen when you—not the companies or governments around you—can determine how you engage with technology. When those stars align, social movements can thrive, communities can flourish, and the internet’s creativity blossoms.

The web plays a crucial role in lifting up the causes you believe in, whatever they may be. These transformative moments are only possible when there is ample space for your privacy, your creativity, and your ability to express yourself freely. No matter where threats may arise, know that EFF is by your side armed with unparalleled expertise and the will to defend the public interest.

I am deeply thankful for people like you who support internet freedom and who value EFF’s role in the movement. It’s a team effort.

One More Day for Summer Treats

Leading up to EFF’s anniversary today, we’ve been having some fun with campfire tales from The Encryptids. We reimagined folktales about cryptids, like Bigfoot and the jackalope, from the perspective of creatures who just want what we all want: a privacy-protective, creative web that lifts users up with technology that respects critical rights and freedoms!

As EFF’s 34th birthday gift to you, I invite you to join EFF for just $20 today and you’ll get two limited-time gifts featuring The Encryptids. On top of that, Craig Newmark Philanthropies will match up to $30,000 for your first year as a monthly or annual Sustaining Donor! Many thanks to Craig—founder of Craigslist and a persistent supporter of digital freedom—for making this possible.

Join EFF

For the Future of Privacy, Security, & Free Expression

We at EFF take our anniversary as an opportunity to applaud our partners, celebrate supporters like you, and appreciate our many successes for privacy and free expression. But we never lose sight of the critical job ahead. Thank you for supporting EFF in our mission to ensure that technology supports freedom, justice, and innovation for all people of the world.

The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority (with 3.4 million warrantless searches of Americans’ communications in 2021 alone) thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications, so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans’ communications “incidentally” under Section 702 surveillance. The government acquires these communications without a probable cause warrant, and so long as it can make out some connection to FISA’s very broad definition of “foreign intelligence,” it can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. Section 702 creates an end run around the Constitution for the FBI, and with the Abbate memo, agents are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities that the government could require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider, which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority that opens the door for this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater; not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.   

U.S. Senate and Biden Administration Shamefully Renew and Expand FISA Section 702, Ushering in a Two Year Expansion of Unconstitutional Mass Surveillance

One week after it was passed by the U.S. House of Representatives, the Senate has passed what Senator Ron Wyden has called “one of the most dramatic and terrifying expansions of government surveillance authority in history.” President Biden then rushed to sign it into law.  

The perhaps ironically named “Reforming Intelligence and Securing America Act (RISAA)” does everything BUT reform Section 702 of the Foreign Intelligence Surveillance Act (FISA). RISAA not only reauthorizes this mass surveillance program, it greatly expands the government’s authority by allowing it to compel a much larger group of people and providers into assisting with this surveillance. The bill’s only significant “compromise” is a limited, two-year extension of this mass surveillance. But overall, RISAA is a travesty for Americans who deserve basic constitutional rights and privacy whether they are communicating with people and services inside or outside of the US.

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large telecommunications service providers: massive amounts of traffic on the Internet backbone are accessed and those communications on the government’s secret list are copied. And that’s just one part of the massive, expensive program. 

While Section 702 prohibits the NSA and FBI from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally.” The government can then conduct backdoor, warrantless searches of these “incidentally collected” communications.

The government cannot even follow the very lenient rules about what it does with the massive amount of information it gathers under Section 702, repeatedly abusing this authority by searching its databases for Americans’ communications. In 2021 alone, the FBI reported conducting up to 3.4 million warrantless searches of Section 702 data using Americans’ identifiers. Given this history of abuse, it is difficult to understand how Congress could decide to expand the government’s power under Section 702 rather than rein it in.

One of RISAA’s most egregious expansions is its large but ill-defined increase of the range of entities that have to turn over information to the NSA and FBI. This provision allegedly “responds” to a 2023 decision by the FISC Court of Review, which rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While the New York Times reports that the unknown company from this FISC opinion was a data center, this new provision is written so expansively that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider. This could potentially include landlords, maintenance people, and many others who routinely have access to your communications on the interconnected internet.

This is to say nothing of RISAA’s other substantial expansions. RISAA changes FISA’s definition of “foreign intelligence” to include “counternarcotics”: this will allow the government to use FISA to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths,” but also to any of their precursors. While surveillance under FISA has (contrary to what most Americans believe) never been limited exclusively to terrorism and counterespionage, RISAA’s expansion of FISA to ordinary crime is unacceptable.

RISAA also allows the government to use Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released in 2023, the FISC repeatedly denied government attempts to obtain some version of this authority, before finally approving it for the first time in 2023. By formally lowering Section 702’s protections for immigrants and asylum seekers, RISAA exacerbates the risk that government officials could discriminate against members of these populations on the basis of their sexuality, gender identity, religion, or political beliefs.

Faced with massive pushback from EFF and other civil liberties advocates, some members of Congress, like Senator Ron Wyden, raised the alarm. We were able to squeeze out a couple of small concessions. One was a shorter reauthorization period for Section 702, meaning that the law will be up for review in just two more years. Also, in a letter to Congress, the Department of Justice claimed it would only interpret the new provision to apply to the type of unidentified businesses at issue in the 2023 FISC opinion. But a pinky promise from the current Department of Justice is not enforceable and easily disregarded by a future administration. There is some possible hope here, because Senator Mark Warner promised to return to the provision in a later defense authorization bill. But this whole debacle demonstrates how Congress gives the NSA and FBI nearly free rein when it comes to protecting Americans: any limitation that actually protects us (and here the FISA Court actually did some protecting) is simply swept away.

RISAA’s passage is a shocking reversal—EFF and our allies had worked hard to put together a coalition aimed at enacting a warrant requirement for Americans and some other critical reforms, but the NSA, FBI and their apologists just rolled Congress with scary-sounding (and incorrect) stories that a lapse in the spying was imminent. It was a clear dereliction of Congress’s duty to oversee the intelligence community in order to protect all of the rest of us from its long history of abuse.

After more than 20 years of doing this work, we know that rolling back any surveillance authority, especially one as deeply entrenched as Section 702, is an uphill fight. But we aren’t going anywhere. We had more Congressional support this time than we’ve had in the past, and we’ll be working to build on that over the next two years.

Too many members of Congress (and the Administrations of both parties) don’t see any downside to violating your privacy and your constitutional rights in the name of national security. That needs to change.

Bad Amendments to Section 702 Have Failed (For Now)—What Happens Next?

Yesterday, the House of Representatives voted against considering a largely bad bill that would have unacceptably expanded the tentacles of Section 702 of the Foreign Intelligence Surveillance Act, along with reauthorizing it and introducing some minor fixes. Section 702 is Big Brother’s favorite mass surveillance law that EFF has been fighting since it was first passed in 2008. The law is currently set to expire on April 19. 

Yesterday’s decision not to decide is good news, at least temporarily. Once again, a bipartisan coalition of lawmakers—led by Rep. Jim Jordan and Rep. Jerrold Nadler—has staved off the worst outcome: expanding 702 mass surveillance in the guise of “reforming” it. But the fight continues, and we need all Americans to make their voices heard. 

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

Tell Congress: Section 702 needs serious reforms

Yesterday’s vote means the House also will not consider amendments to Section 702 surveillance introduced by members of the House Judiciary Committee (HJC) and House Permanent Select Committee on Intelligence (HPSCI). As we discuss below, while the HJC amendments would contain necessary, minimum protections against Section 702’s warrantless surveillance, the HPSCI amendments would impose no meaningful safeguards upon Section 702 and would instead increase the threats Section 702 poses to Americans’ civil liberties.

Section 702 expressly authorizes the government to collect foreign communications inside the U.S. for a wide range of purposes, under the umbrellas of national security and intelligence gathering. While that may sound benign for Americans, foreign communications include a massive amount of Americans’ communications with people (or services) outside the United States. Under the government’s view, intelligence agencies and even domestic law enforcement should have backdoor, warrantless access to these “incidentally collected” communications, instead of having to show a judge there is a reason to query Section 702 databases for a specific American's communications.

Many amendments to Section 702 have recently been introduced. In general, amendments from members of the HJC aim at actual reform (although we would go further in many instances). In contrast, members of HPSCI have proposed bad amendments that would expand Section 702 and undermine necessary oversight. Here is our analysis of both HJC’s decent reform amendments and HPSCI’s bad amendments, as well as the problems the latter might create if they return.

House Judiciary Committee’s Amendments Would Impose Needed Reforms

The most important amendment HJC members have introduced would require the government to obtain court approval before querying Section 702 databases for Americans’ communications, with exceptions for exigency, consent, and certain queries involving malware. As we recently wrote regarding a different Section 702 bill, because Section 702’s warrantless surveillance lacks the safeguards of probable cause and particularity, it is essential to require the government to convince a judge that there is a justification before the “separate Fourth Amendment event” of querying for Americans’ communications. This is a necessary, minimum protection and any attempts to renew Section 702 going forward should contain this provision.

Another important amendment would prohibit the NSA from resuming “abouts” collection. Through abouts collection, the NSA collected communications that were neither to nor from a specific surveillance target but merely mentioned the target. While the NSA voluntarily ceased abouts collection following Foreign Intelligence Surveillance Court (FISC) rulings that called into question the surveillance’s lawfulness, the NSA left the door open to resume abouts collection if it felt it could “work that technical solution in a way that generates greater reliability.” Under current law, the NSA need only notify Congress when it resumes collection. This amendment would instead require the NSA to obtain Congress’s express approval before it can resume abouts collection, which, given this surveillance’s past abuses, would be notable.

The other HJC amendment Congress should accept would require the FBI to give a quarterly report to Congress of the number of queries it has conducted of Americans’ communications in its Section 702 databases and would also allow high-ranking members of Congress to attend proceedings of the notoriously secretive FISC. More congressional oversight of FBI queries of Americans’ communications and FISC proceedings would be good. That said, even if Congress passes this amendment (which it should), both Congress and the American public deserve much greater transparency about Section 702 surveillance.  

House Permanent Select Committee on Intelligence’s Amendments Would Expand Section 702

Instead of much-needed reforms, the HPSCI amendments expand Section 702 surveillance.

One HPSCI amendment would add “counternarcotics” to FISA’s definition of “foreign intelligence information,” expanding the scope of mass surveillance even further from the antiterrorism goals that most Americans associate with FISA. In truth, FISA’s definition of “foreign intelligence information” already goes beyond terrorism. But this counternarcotics amendment would further expand “foreign intelligence information” to allow FISA to be used to collect information relating to not only the “international production, distribution, or financing of illicit synthetic drugs, opioids, cocaine, or other drugs driving overdose deaths” but also to any of their precursors. Given the massive amount of Americans’ communications the government already collects under Section 702 and the government’s history of abusing Americans’ civil liberties through searching these communications, the expanded collection this amendment would permit is unacceptable.

Another amendment would authorize using Section 702 to vet immigrants and those seeking asylum. According to a FISC opinion released last year, the government has sought some version of this authority for years, and the FISC repeatedly denied it—finally approving it for the first time in 2023. The FISC opinion is very redacted, which makes it impossible to know either the current scope of immigration and visa-related surveillance under Section 702 or what the intelligence agencies have sought in the past. But regardless, it’s deeply concerning that HPSCI is trying to formally lower Section 702 protections for immigrants and asylum seekers. We’ve already seen the government revoke people’s visas based upon their political opinions—this amendment would put this kind of thing on steroids.

The last HPSCI amendment tries to make more companies subject to Section 702’s required turnover of customer information in more instances. In 2023, the FISC Court of Review rejected the government’s argument that an unknown company was subject to Section 702 in some circumstances. While we don’t know the details of the secret proceedings because the FISC Court of Review opinion is heavily redacted, this is an ominous attempt to increase the scope of providers subject to Section 702. With this amendment, HPSCI is attempting to legislatively overrule a court already famously friendly to the government. HPSCI Chair Mike Turner acknowledged as much in a House Rules Committee hearing earlier this week, stating that this amendment “responds” to the FISC Court of Review’s decision.

What’s Next 

This week’s vote was unlikely to be the last time Congress considers Section 702 before April 19—we expect another attempt to renew this surveillance authority in the coming days. We’ve been very clear: Section 702 must not be renewed without essential reforms that protect privacy, improve transparency, and keep the program within the confines of the law. 

Take action

Tell Congress: Section 702 needs serious reforms

2023 Year in Review

By: Cindy Cohn
December 21, 2023, 11:00

At the end of every year, we look back at the last 12 months and evaluate what has changed for the better (and worse) for digital rights. While we can be frustrated (hello, ongoing attacks on encryption), overall it's always an exhilarating reminder of just how far we've come since EFF was founded over 33 years ago. The scale alone is breathtaking: digital rights started as a niche, future-focused issue that we would struggle to explain to nontechnical people; now it's deeply embedded in all of our lives.

The legislative, court, and agency fights around the world this year also helped us see and articulate a common thread: the need for a "privacy first" approach to laws and technology innovation. As we wrote in a new white paper aptly entitled "Privacy First: A Better Way to Address Online Harms," many of the ills of today’s internet have a single thing in common: they are built on a business model of corporate surveillance and behavioral advertising. Addressing that problem could help us make great strides on a range of issues and avoid many of the likely terrible impacts of today's proposed "solutions."

Instead of considering proposals that would censor speech and put children's access to internet resources at the whims of state attorneys general, we could be targeting the root cause of the concern: internet companies' collection, storage, sales, and use of our personal information and activities to feed their algorithms and ad services. Police go straight to tech companies for your data, or for the data on everyone who was near a certain location. And that's when they even bother with a court-overseen process, rather than simply issuing a subpoena, showing up and demanding it, or buying data from data brokers. If we restricted what data tech companies could keep and for how long, we could also tackle this problem at the source. Instead of unconstitutional link taxes to save local journalism, laws that attack behavioral advertising (which is built on the collection of data) would break the ad and data monopoly that puts journalists at the mercy of Big Tech in the first place.

Concerns about what is feeding AI, social media algorithms, government spying (whether by your own country or another's), online harassment, getting access to healthcare: so much can be better protected if we address privacy first. EFF knows this, and it's why, in 2023, we did things like launch the Tor University Challenge, urge the Supreme Court to recognize that the Fifth Amendment protects you from being forced to give your phone's passcode to police, and work to fix the dangerously flawed UN Cybercrime Treaty. Most recently, we celebrated Google's decision to limit the data collected and kept in its "Location History" as a potentially huge step toward preventing geofence warrants, which use Google's storehouse of location data to conduct massive, unconstitutional searches sweeping in many innocent bystanders. 

Of course, as much as individuals need more privacy, we also need more transparency, especially from our governments and the big corporations that rule so much of our digital lives. That's why EFF urged the Supreme Court to overturn an order preventing Twitter (now X) from publishing a transparency report with data about what, exactly, government agents have asked the company for. It's why we won an important victory in keeping laws and regulations online and accessible. And it's why we defended the Internet Archive from an attack by major publishers seeking to cripple libraries' ability to give the rest of us access to knowledge in the digital age.

All of that barely scratches the surface of what we've been doing this year. But none of it would be possible without the strong partnership of our members, supporters, and all of you who stood up and took action to build a better future. 

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2023 every day between now and the new year.

Does Less Consumer Tracking Lead to Less Fraud?

By: Cindy Cohn
December 18, 2023, 14:59

Here’s another reason to block digital surveillance: it might reduce financial fraud. That’s the upshot of a small but promising study published as a National Bureau of Economic Research (NBER) working paper, “Consumer Surveillance and Financial Fraud.” 

Authors Bo Bian, Michaela Pagel and Huan Tang investigated the relationship between the rollout of Apple’s App Tracking Transparency (ATT) and reports of consumer financial fraud. Many apps can track users across apps or websites owned by other companies. By default, Apple's ATT opted all iPhone users out of tracking, which meant that apps and websites no longer received user identifiers unless they obtained user permission. 

The highlight of the research is that Apple users were less likely to be victims of financial fraud after Apple implemented the App Tracking Transparency policy. The results showed that a 10% increase in the share of Apple users in a particular ZIP code leads to a roughly 3% reduction in financial fraud complaints. 

The Methodology 

The authors designed a complicated methodology for this study, but here are the basics for those who don’t have time to tackle the actual paper. 

The authors primarily use the number of financial fraud complaints and the amount of money lost to fraud to track how much fraud is happening. These figures come from the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC). The researchers used machine learning and keyword searches to narrow the complaints down to those related to financial fraud caused by lax data privacy, as opposed to other types of financial fraud. They concluded that complaints in certain product categories—like credit reporting and debt collection—are most likely to implicate the lack of data privacy. 
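To make that filtering step concrete, here is a minimal, hypothetical Python sketch of the kind of keyword screen the paper describes. The file name, column names, product categories, and keyword list are illustrative assumptions, not the authors’ actual pipeline.

    # Hypothetical sketch of a keyword screen over complaint data.
    # The file name, columns, and keywords are assumptions for illustration.
    import pandas as pd

    FRAUD_KEYWORDS = [
        "identity theft", "unauthorized charge", "data breach",
        "account takeover", "stolen information",
    ]

    complaints = pd.read_csv("cfpb_complaints.csv")  # assumed complaint export

    # The paper flags certain product categories (e.g. credit reporting,
    # debt collection) as most likely to implicate lax data privacy.
    privacy_products = {"Credit reporting", "Debt collection"}
    subset = complaints[complaints["product"].isin(privacy_products)]

    # Keep complaints whose free-text narrative mentions privacy-driven fraud.
    pattern = "|".join(FRAUD_KEYWORDS)
    mask = subset["narrative"].str.contains(pattern, case=False, na=False)
    privacy_fraud = subset[mask]
    print(len(privacy_fraud), "privacy-related fraud complaints")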

The study used data acquired from a company called Safegraph to determine the share of iPhone users at the ZIP code level. It then estimated the effect of Apple’s ATT on the number of complaints of financial fraud in each ZIP code. The researchers found a noticeable, measurable reduction in complaints for iPhone users after ATT was implemented. They also investigated variation in this reduction across different demographic groups and found that the effect is stronger for minorities, women, and younger people—suggesting that these groups, which may have been more vulnerable to fraud before, saw a greater increase in protection when Apple turned on ATT.  

To test the accuracy and reliability of their results, the researchers employed several methods typically used in statistical analysis, including placebo tests, robustness checks, and Poisson regression. In lay terms, these methods test the results against different assumptions, the potential effects of other factors, and alternative model specifications. 

These methods help establish causation (as opposed to mere correlation), in part by ruling out other possible causes. Although one can never be 100% sure in a regression analysis that a result was caused by a particular factor, these methods are commonly used to reasonably infer causation, and the paper applies them meticulously. 
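For readers who want a feel for the statistics, here is a compact, purely illustrative Python sketch of a Poisson regression of the general kind the paper runs, fit on synthetic data. The variable names, model specification, and simulated effect size are assumptions for demonstration, not the authors’ actual data or code.

    # Purely illustrative: a synthetic ZIP-code panel and a Poisson
    # regression with an interaction term, echoing the paper's approach.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_zips, n_quarters = 500, 8
    quarter = np.tile(np.arange(n_quarters), n_zips)
    post_att = (quarter >= 4).astype(int)  # assumed ATT rollout at quarter 4
    iphone_share = np.repeat(rng.uniform(0.2, 0.8, n_zips), n_quarters)

    # Simulated complaint counts: ZIP codes with more iPhone users see
    # fewer complaints once ATT is on (the -0.3 effect is made up).
    log_rate = 3.0 - 0.3 * iphone_share * post_att
    complaints = rng.poisson(np.exp(log_rate))

    df = pd.DataFrame({"complaints": complaints,
                       "iphone_share": iphone_share,
                       "post_att": post_att})

    # The coefficient on iphone_share:post_att captures the differential
    # post-ATT change in complaints where more users got ATT's protection.
    model = smf.poisson("complaints ~ iphone_share * post_att", data=df).fit()
    print(model.summary())

A placebo test in this setup would, for example, re-run the regression with a fake rollout date set before ATT actually launched and check that the interaction coefficient vanishes.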

What This Means 

While the scope of the data is small, this is the first significant research we’ve seen that connects increased privacy with decreased fraud. This should matter to all of us. It reinforces that when companies take steps to protect our privacy, they also help protect us from financial fraud. This is a point we made in our Privacy First whitepaper, which discusses the many harms that a robust privacy system can protect us from.  Lawmakers and regulators should take note.   

In implementing ATT, Apple has proven something EFF has long said: with over 75% of consumers keeping all tracking off as of May 2022 rather than opting in, it’s clear that most consumers want more privacy than they are currently getting through the surveillance business model. Now, with this research, it seems that when they get more privacy, they also get some protection against fraud. 

Of course, we are not done pushing Apple or anyone else on stepping up for our privacy. As Professor Zeynep Tufekci noted in a recent NY Times column, “I was happy to see Apple switch the defaults for tracking in 2021, but I’m not happy that it was because of a decision by one powerful company—what oligopoly giveth, oligopoly can taketh away. We didn’t elect Apple’s chief executive, Tim Cook, to be the sovereign of our digital world. He could change his mind.”  

We applaud Apple for implementing ATT. The initial research indicates that it may have a welcome additional effect for all of us who need both privacy and security against fraud. We’d like to see more research on this connection and, of course, more companies following Apple’s lead.  

As a side note, it is important to mention that we are concerned about researchers using data from Safegraph, a company that EFF has criticized for unethical personal data collection and its PR efforts to "research wash" its practices by making that data available for free to academics. The use of this data in several academic research projects speaks to the reach of unethical data brokers as well as to the need to rein them in, both with technical measures like ATT and with robust consumer data privacy legislation.  

However, the use of this data does not undermine the credibility of the research or its conclusions. The iOS share per ZIP code could have been determined from other legitimate sources, and doing so would have had no effect on the results measuring the impact of ATT.  

Thanks to EFF Intern Muhammad Essa for research and key drafting help with this blog post.

Speaking Freely: Alison Macrina

By: Cindy Cohn
December 6, 2023, 16:02

Cohn: Alright, we’re doing a Speaking Freely interview with Alison. Alison, why don’t you say your name?

Alison Macrina, like Ballerina

Cohn: From the Library Freedom Project, and a 2023 EFF Award winner! Alright, let’s get into it. What does freedom of speech mean to you, Alison?

Well, to me it means the freedom to seek information, to use it, to speak it, but specifically without fear of retribution from those in power. And in LFP (Library Freedom Project) we’re really particularly concerned about how free speech and power relate. In the US, I think about power that comes from, not just the government, but also rich individuals and how they use their money to influence things like free speech, as well as corporations. I also think about free speech in terms of how it allows us to define the terms of public debate and conversation. And how also we can use it to question and shift the status quo to, in my view, more progressive ends. I think the best way that we can use our speech is using it to challenge and confront power. And identifying power structures. I think those power structures are really present in how we talk about speech. I’ve spent a lot of time thinking about all the big money that’s involved with shaping speech like the Koch brothers, etc, and how they’re influencing the culture wars. Which is why I think it’s really important, when I think about free speech, to think about things like social and economic justice. In LFP we talk about information democracy – that’s like the EFF Award that we got – and what that means to us is about how free expression, access, privacy, power, and justice interact. It’s about recognizing the different barriers to free expression, and what is actually being said, and also prioritizing the collective and our need to be able to criticize and hold accountable the people with power so that we can make a better world. 

Cohn: One of the things I think the Library Freedom Project does is really talk about the ability to access information as part of freedom of expression. Sometimes we only think about the speaking part, the part where it goes out, and I think one of the things that LFP really does is elevate the part where you get access to information, which is equally, and importantly, a part of free speech. Is that something you want to talk about a little more? 

I think it’s one of the things that make libraries so special, right? It’s like, what else do we have in our society that is a space just dedicated to information access? You know, anybody can use the library. Libraries exist in every community in the country. There’s all kinds of little sound bites about that, like, “there’s more libraries than there are McDonalds,” or, “there’s more libraries than Starbucks,” and what I think is also really unique and valuable about libraries is that they’re a public good that’s not means-tested. So in other words, they show up in poor communities, they’re in rich communities, they’re in middle-class communities. Most other public goods – if they exist – are only for the super, super poor. So it’s this, kind of… at its best… libraries can be such an equalizer. Some of the things we do in Library Freedom Project, we try to really push what the possibilities are for that kind of access. So offering trainings for librarians that expand on our understanding of free speech and access and privacy. Things like helping people understand artificial intelligence and algorithmic literacy. What are these tools? What do they mean? How do they work? Where are they in use? So helping librarians understand that so they can teach their communities about it. We try to think creatively about – what are the different kinds of technology in use in our world and how can librarians be the ones to offer better information about them in our communities? 

Cohn: What are the qualities that make you passionate about freedom of expression or freedom of speech? 

I mean it’s part of why I became a librarian. I don’t remember when or why it was what I wanted to do. I just knew it was what I wanted. I had this sort of Lloyd Dobler Say Anything moment where he’s like, “I don’t want to buy anything that’s bought, sold, or made. I don’t want to sell anything that’s sold, bought, or made.” You know, I knew I wanted to do something in the public good. And I loved to read. And I loved to have an opinion and talk. And I felt like the library was the place where, not only could I do that, but it was a space that just celebrated that. And I think especially, with all of the things that are happening in the world now, libraries are a place where we can really come together around ideas, we can expand our ideas, we can get introduced to ideas that are different from our own. I think that’s really extraordinary and super rare. I’ve always just really loved the library and wanted to do it for my life. And so that’s why I started Library Freedom Project.

Cohn: That’s wonderful. Let’s talk a little about online speech and regulation. How do you think about online speech and regulation and how we should think about those issues? 

Well, I think we’re in a really bad position about it right now because, to my mind, there was a too-long period of inaction by these companies. And I think that a decade or so of inaction created the conditions for a really harmful information movement. And now, anything that we do has unintended consequences. Content moderation is obviously extremely important; it’s an important public demand. I think it should be transparent and accountable. But for all the harmful information movements out there, every attempt I have seen to regulate them has just resulted in people becoming hardened in their positions. 

This morning, for example, I was listening to the Senate Judiciary Hearings on book banning – because I’m a nerd – and it was a mess. It ended up not even really being about the book banning issue – which is a huge, huge issue in the library world – but it was all these Republican Senators talking about how horrible it was that the Biden administration was suppressing different kinds of COVID misinfo and disinfo. And they didn’t call it that, obviously, they called it “information” or “citizen science” or whatever. And it’s true that the Biden administration did do that – they made those demands of Facebook and so what were the results? It didn’t stop any of that disinformation. It didn’t change anybody’s minds about it. I think another big failure was Facebook and other companies trying to react to fake news by labeling stuff. And that was just totally laughable. And a lot of it was really wrong. You know, they were labeling all these leftwing outlets as Russian propaganda. I think that I don’t really know what the solution is to dealing with all of that. 

I think, though, that we’re at a place where the toothpaste is already so far out of the tube that I don’t know that any amount of regulation of it is going to be effective. I wish that those companies were regulated like public resources. I think that would make for a big shift. I don’t think companies should be making those kinds of decisions about speech. It’s such a huge problem, especially thinking about how it plays out for us at the local level in libraries- like because misinfo and disinfo are so popular, now we have people who request those materials from the library. And librarians have to make the decision- are we going to give in to public demand and buy this stuff or are we going to say, no, we are curators of information and we care about truth? We’re now in this position that because of this environment that’s been created outside of us, we have to respond to it. And it’s really hard- we’re also facing, relatedly, a massive rightwing assault on the library. A lot of people are familiar with this showing up as book bans, but it’s legislation, it’s taking over Boards, and all these other things. 

Cohn: What kinds of situations, if any, make it appropriate for governments or companies to limit speech? And I think those are two separate questions, governments on the one hand and companies on the other. 

I think that, you know, Alex Jones should not be allowed to say that Sandy Hook was a hoax – obviously, he’s facing consequences for that now. But the damage was done. Companies are tricky, because on the one hand, I think that different environments should be able to dictate the terms of how their platforms work. Like LFP is technically a company, and you’re not coming on any of my platforms and saying Nazi shit. But I also don’t want those companies to be arbiters of speech. They already are, and I think it’s a bad thing. I think that government regulation of speech we have to be really careful about. Because obviously the unintended consequences – or sometimes the intended consequences – are always harmful to marginalized people. 

Part of what motivated me to care about free speech is, I’ve been a political activist most of my life, on the left, and I am a big history nerd. And I paid a lot of attention to, historically, the way that leftist movements’ speech has been marginalized and censored, from the Red Scare to anti-war speech. And I also look at a lot of what is happening now with the repression after the 2020 uprising – the No Cop City people just had this huge RICO indictment come down. And that is all speech repression that impacts things that I care about. And so I don’t want the government to intervene in any way there. At the same time, white supremacy is a really big problem. It has very real material effects and harms people. And one way this is a really big issue in my world: part of the rightwing attack on libraries is a bad faith free speech effort. They talk about free speech a lot. They talk about [how] they want their speech to be heard. But what they actually mean is, they want to create a hostile environment for other people. And so this is something that I end up feeling really torn about. Because I don’t want to see anyone go to prison for speech. I don’t want to see increased government regulation of speech. But I also think that allowing white supremacists to use the library meeting room or have their events there creates an environment where marginalized people just don’t go. I’m not sure what the responsible thing for us to do is. But I think that thinking about free speech outside of the abstract – thinking about the real material consequences that it has for people, especially in the library world – a lot of civil libertarians like to say, “you just respond with more speech.” And it’s like, well, that’s not realistic. You can’t easily do that, especially when you’re talking about people who will cause some harm to these communities. One thing I do think is a reasonable speech regulation: I don’t think cops should be allowed to lie. And they are allowed to, so we should do something about that. 

Cohn: Who is your free speech hero?

Well, okay, I have a few. Number one is so obvious that I feel like it’s trite to say, but, duh, Chelsea Manning. Everyone says Chelsea Manning, right? But we should give her her flowers again and again. Her life has been shaped by the decisions that she made about the things that she had to say in the public interest. I think that all whistleblowers in general are people that I have enormous respect for. People who know there are going to be consequences for their speech and do it anyway. And will sacrifice themselves for public good – it’s astounding. 

I also am very fortunate to be surrounded by free speech heroes all the time who are librarians. Not just in the nature of the work of the library, like the everyday normal thing, but also in the environment that we’re in right now. Because they are constantly pushing the bounds of public conversation about things like LGBT issues and racial justice and other things that are social goods, under extremely different conditions. Some of them are like, the only librarian in a rural community where, you know, the Proud Boys or the three percenters or whatever militant group is showing up to protest them, is trying to defund their library, is trying to remove them from their positions, is trying to get the very nature of the work criminalized, is trying to redefine what “obscenity” means. And these people, under those conditions, are still pushing for free speech and I think that’s amazing. 

And then the third one I’ll say is, I really try to keep an internationalist approach, and think about what the rest of the world experiences, because we really, even as challenging as things are in the US right now, we have it pretty good. So, when I was part of the Tor Project I got to go to Uganda with Tor to meet with some different human rights activists and talk to them about how they used Tor and help them with their situations. And I met all of these amazing Ugandan environmental activists who were fighting the construction of a pipeline – a huge pipeline from Tanzania to Uganda. And these are some of the world’s poorest people fighting some of the biggest corporations and Nation-States – because the US, Israel, and China all have a major interest in this pipeline. And these are people who were publishing anonymous blogs, with the use of Tor, under extreme threat. Many of them would get arrested constantly. Members of their organization would get disappeared for a few days. And they were doing it anyway, often with the knowledge that it wasn’t even going to change anything. Which just really was mind-blowing. And I stop and think about that a lot, when I think about all the issues that we have with free speech here. Because I think that those are the conditions that, honestly, most of the world is operating under, and those people are everyday heroes and they need to get their flowers. 

Cohn: Terrific, thank you Alison, for taking the time. You have articulated many of the complexities of the current place that we are and a few truths that we can hold, so thank you.

Speaking Freely: Agustina Del Campo

By: Cindy Cohn
November 16, 2023, 13:09

Agustina Del Campo is the Director at the Center for Studies on Freedom of Expression and Access to Information (CELE) at the University of Palermo in Buenos Aires, Argentina. She holds a law degree from Universidad Catolica Argentina and an LL.M. in International Legal Studies from American University Washington College of Law.

Agustina has extensive experience in human rights training, particularly as it relates to freedom of expression and the press in the Inter-American human rights system. She has taught and lectured in several Latin American countries and the U.S.

EFF’s Executive Director Cindy Cohn caught up with Agustina at RightsCon 2023 in Costa Rica. In this brief but powerful exchange Agustina discusses how, though free speech has a bad rap these days, it is inherent in any advocacy agenda aimed at challenging – and changing – the status quo and existing power dynamics.

Cindy Cohn: Would you introduce yourself?

Sure, I’m Agustina Del Campo and I direct the Center for Studies on Freedom of Expression and Access to Information (CELE) in Argentina.

Cohn: First, what does free speech mean to you?

Free speech means a lot of things to me, but it basically means the power to bring unpopular ideas to the mainstream. That is what free speech means to me. It’s the power of advocating for something.

Cohn: Wonderful. How do you think online speech should or shouldn’t be regulated?

Well, I think it should or shouldn’t be regulated in the same way that offline speech should or shouldn’t be regulated. The power of speech is basically not the power to share popular ideas, but the power to share unpopular ideas, and popular ideas are online and offline and they have an impact online and offline. We’ve been discussing the limits and the possibilities and the opportunities and the challenges for speech offline for a number of years, so I think in whatever we decide to do in online speech we should at least bear in mind the discussions that we had prior to getting to this new technology and new tools.

Cohn: I know you’ve told me in the past that you’re a feminist and, obviously you live in Argentina, so you come from the Global Majority. Often we are told that free speech is a white western concept—how do you react to that accusation?

It’s interesting, in a lot of countries the freedom of expression agenda has been somewhat lost. It’s an unpopular time for freedom of expression. A lot of that unpopularity may be due to this association precisely—the freedom of expression agenda as a white male, middle-aged kind of right—and there’s a lot of anger at the place that freedom of expression has. My immediate reaction is the fact that you can have an advocacy agenda for women, for abortion rights, for anything basically, the fact that you were able to bring vulnerable populations to the mainstream conversation, the fact that we are sensitive to gender, to pronouns, to indigenous populations, to children’s needs—it’s largely the product of people fighting for the possibilities of those groups and voices to be heard. It wasn’t long ago that in my country and in my region, Latin America, there was a very conservative regime in a lot of countries where a lot of these claims that today are mainstream and popular and shared were unspeakable. You could not raise them anywhere. It is freedom of expression that has facilitated and allowed those discussions to flourish to become what they are. The fact that a lot of those agendas, the feminist agenda, the most vulnerable populations’ agendas are now really established in a lot of countries and flourishing took a lot of fighting from freedom of expression advocates so that those voices could be heard. The fact that we’re winning doesn’t mean we’ll always be. And we need to protect the guarantees and rights that allowed us to get to where we are now.

Cohn: That is so perfect. I think I just want to stop there. I wish I could put that on posters.

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy; it would raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right to action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.
