
Decentralization Reaches a Turning Point: 2024 in Review

By: Rory Mir
January 1, 2025, 10:39

The steady rise of decentralized networks this year is transforming social media. Platforms like Mastodon, Bluesky, and Threads are still in their infancy but have already shown that when users are given options, innovation thrives, resulting in better tools and protections for our rights online. By moving toward a digital landscape that can’t be monopolized by one big player, we also see broader improvements to network resiliency and user autonomy.

The Steady Rise of Decentralized Networks

Fediverse and Threads

The Fediverse, a wide variety of sites and services most associated with Mastodon, continued to evolve this year. Meta’s Threads began integrating with the network, marking a groundbreaking shift for the company. Only a few years ago, EFF could only dream of the impact an embrace of interoperability would have at a company notorious for building walled gardens that trap users within its platforms. By allowing Threads users to share their posts with Mastodon and the broader Fediverse (and, through bridges, Bluesky) without leaving their home platform, Meta is introducing millions to the benefits of interoperability. We look forward to this continued trajectory, and to a day when it is easy to move to or from Threads and still follow and interact with the same federated community.
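How does a Threads post reach a Mastodon user? Both services speak ActivityPub, and discovery starts with a WebFinger lookup. Here is a minimal sketch of that first step (assuming Python with the `requests` library; the handle is just a real-world example):

```python
# A minimal sketch of the WebFinger lookup that underpins fediverse
# interoperability: any server can resolve user@domain to the URL of an
# ActivityPub actor document, regardless of which platform hosts it.
import requests

def resolve_actor(handle: str) -> str:
    """Resolve 'user@domain' to the URL of its ActivityPub actor."""
    user, domain = handle.split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    for link in resp.json()["links"]:
        if link.get("rel") == "self":
            return link["href"]  # e.g. https://domain/users/user
    raise ValueError("no ActivityPub actor found")

print(resolve_actor("Gargron@mastodon.social"))
```

Federated Threads accounts are discoverable the same way, which is what lets a Mastodon server follow them and fetch their posts.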

Threads’ enormous user base—100 million daily active users—now dwarfs both Mastodon and Bluesky. Its integration into more open networks is a potential turning point in popularizing the decentralized social web. However, Meta’s poor reputation on privacy, moderation, and censorship drove many Fediverse instances to preemptively block Threads, which may fragment the network.

We explored how Threads stacks up against Mastodon and Bluesky, across moderation, user autonomy, and privacy. This development highlights the promise of decentralization, but it also serves as a reminder that corporate giants may still wield outsized influence over ostensibly open systems.

Bluesky’s Explosive Growth

While Threads dominated in sheer numbers, Bluesky was this year’s breakout star. At the start of the year, Bluesky had fewer than 200,000 users and was still invite-only. In the last few months of 2024, however, the project experienced over 500% growth in a single month and ultimately reached more than 25 million users.

Unlike Mastodon, which integrates into the Fediverse, Bluesky took a different path, building its own decentralized protocol (AT Protocol) to ensure user data and identities remain portable and users retain a “credible exit.” This innovation allows users to carry their online communities across platforms seamlessly, sparing them the frustration of rebuilding their community. Unlike the Fediverse, Bluesky has prioritized building a drop-in replacement for Twitter, and is still mostly centralized. Bluesky has a growing arsenal of tools available to users, embracing community creativity and innovation. 
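What a “credible exit” means mechanically: your handle resolves to a stable DID, and the DID document points at whichever server currently hosts your data, so moving hosts doesn’t break your identity or your followers. A minimal sketch (assuming Python with `requests`; both endpoints, Bluesky’s public AppView and the PLC directory, are publicly documented parts of the AT Protocol ecosystem):

```python
# A minimal sketch of AT Protocol's portable identity. A handle maps to
# a DID (a stable identifier), and the DID document names the Personal
# Data Server (PDS) currently hosting the account's repository.
import requests

APPVIEW = "https://public.api.bsky.app"  # public, unauthenticated AppView

def handle_to_did(handle: str) -> str:
    resp = requests.get(
        f"{APPVIEW}/xrpc/com.atproto.identity.resolveHandle",
        params={"handle": handle},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["did"]

def did_to_pds(did: str) -> str:
    # did:plc identities resolve through the public PLC directory.
    doc = requests.get(f"https://plc.directory/{did}", timeout=10).json()
    for service in doc["service"]:
        if service["id"] == "#atproto_pds":
            return service["serviceEndpoint"]
    raise ValueError("no PDS listed in DID document")

did = handle_to_did("bsky.app")
print(did)              # stable identity, independent of any one server
print(did_to_pds(did))  # the server that currently hosts the data
```

If the account migrates, the DID stays the same and only the service endpoint in the DID document changes.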

While Bluesky will be mostly familiar to former Twitter users, we ran through some tips for managing your Bluesky feed, and answered some questions for people just joining the platform.

Competition Matters

Keeping the Internet Weird

The rise of decentralized platforms underscores the critical importance of competition in driving innovation. Platforms like Mastodon and Bluesky thrive because they fill gaps left by corporate giants, and encourage users to find experiences which work best for them. The traditional social media model puts up barriers so platforms can impose restrictive policies and prioritize profit over user experience. When the focus shifts to competition and a lack of central control, the internet flourishes.

Whether a user wants the community focus of Mastodon, the global megaphone of Bluesky, or something else entirely, smaller platforms let people build experiences independent of the motives of larger companies. Decentralized platforms are ultimately most accountable to their users, not advertisers or shareholders.

Making Tech Resilient

This year highlighted the dangers of concentrating too much power in the hands of a few dominant companies. A major global IT outage this summer starkly demonstrated the fragility of digital monocultures, where a single point of failure can disrupt entire industries. These failures underscore the importance of decentralization, where networks are designed to distribute risk, ensuring that no single system compromise can ripple across the globe.  

Decentralized projects like Meshtastic, which uses long-range radio to provide off-grid communication in disaster scenarios, exemplify the kind of resilient infrastructure we need. However, even these innovations face threats from private interests. This year, a proposal from NextNav to claim the 900 MHz band for its own use put Meshtastic’s experimentation—and by extension, the broader potential of decentralized communication—at risk. As we discussed in our FCC comments, such moves illustrate how monopolistic power not only stifles competition but also jeopardizes resilient tools that could safeguard people’s connectivity.
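For a sense of how lightweight this layer is, here is a minimal sketch using the official `meshtastic` Python library (assuming a Meshtastic-compatible LoRa radio is plugged in over USB):

```python
# A minimal sketch of off-grid messaging with the meshtastic-python
# library. Messages hop peer to peer over LoRa radios in the 900 MHz
# band, with no cell tower or internet connection involved.
import meshtastic.serial_interface

# Auto-detects the first Meshtastic device on a serial port.
iface = meshtastic.serial_interface.SerialInterface()

# Broadcast a text message to every reachable node on the mesh.
iface.sendText("Checkpoint reached, all safe.")
iface.close()
```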

Looking Ahead

This year saw meaningful strides toward a more decentralized, creative, and resilient internet heading into 2025. Interoperability and decentralization will likely continue to expand. As they do, EFF will be vigilant, watching for threats to decentralized projects and obstacles to the growth of open ecosystems.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Deepening Government Use of AI and E-Government Transition in Latin America: 2024 in Review

Policies aimed at fostering digital government processes are gaining traction in Latin America, at both local and regional levels. While these initiatives can streamline access to public services, they can also make those services less accessible and less clear, and put people's fundamental rights at risk. As we move forward, we must emphasize transparency and privacy guarantees during government digital transition processes.

Regional Approach to Digitalization 

In November, the Ninth Ministerial Conference on the Information Society in Latin America and the Caribbean approved the 2026 Digital Agenda for the region (eLAC 2026). This initiative unfolds within the UN Economic Commission for Latin America and the Caribbean (ECLAC), a regional cooperation forum focused on furthering the economic development of LAC countries.

One of the thematic pillars of eLAC 2026 is the digital transformation of the State, including the digitalization of government processes and services to improve efficiency, transparency, citizen participation, and accountability. The digital agenda also aims to improve digital identity systems to facilitate access to public services and promote cross-border digital services in a framework of regional integration. In this context, the agenda points out countries’ willingness to implement policies that foster information-sharing, ensuring privacy, security, and interoperability in government digital systems, with the goal of using and harnessing data for decision-making, policy design and governance.

This regional process reflects and feeds country-level initiatives that have also gained steam in Latin America in the last few years. Incentives for government digital transformation usually take shape against the backdrop of improving government efficiency, so it is critical to qualify what efficiency means in practice. Often, “efficiency” has meant budget cuts or shrinking access to public processes and benefits at the expense of fundamental rights. The promotion of fundamental rights should guide a State’s metrics for what counts as efficient and successful.

As such, while digitalization can play an important role in streamlining access to public services and facilitating the enjoyment of rights, it can also make it more complex for people to access these same services and generally interact with the State. The most vulnerable are those who most need that interaction to work well, and those whose unusual circumstances are often not accommodated by the technology being used. They are also the population most likely to have scarce access to digital technologies and limited digital skills.

In addition, whereas properly integrating digital technologies into government processes and routines carries the potential to enhance transparency and civic participation, this is not a guaranteed outcome. It requires government willingness and policies oriented to these goals. Otherwise, digitalization can turn into an additional layer of complexity and distance between citizens and the State. Improving transparency and participation involves conceiving people not only as users of government services, but as participants in the design and implementation of public policies, including those related to States’ digital transition.

Digital identity and data-interoperability systems are generally treated as a natural part of government digitalization plans. Yet they should be approached with care. As we have highlighted, effective and robust data privacy safeguards do not necessarily accompany states’ investments in these systems, despite the fact that they can be expanded into a regime of unprecedented data tracking. Among other recommendations and red lines, it’s crucial to support each person’s right to choose to continue using physical documentation instead of going digital.

This set of concerns stresses the importance of having an underlying institutional and normative structure to uphold fundamental rights within digital transition processes. Such a structure involves solid transparency and data privacy guarantees backed by equipped and empowered oversight authorities. Still, States often neglect the crucial role of that combination. In 2024, Mexico provided a notorious example: just as the new Mexican government took steps to advance the country’s digital transformation, it also moved to close key independent oversight authorities, like the National Institute for Transparency, Access to Information and Personal Data Protection (INAI).

Digitalization and Government Use of Algorithmic Systems for Rights-Affecting Purposes

AI strategies approved in different Latin American countries show how fostering government use of AI is an important lever in national AI plans and a component of government digitalization processes.

In October 2024, Costa Rica became the first Central American country to launch an AI strategy. One of its strategic axes, named "Smart Government", focuses on promoting the use of AI in the public sector. The document highlights that by incorporating emerging technologies in public administration, it will be possible to optimize decision making and automate bureaucratic tasks. It also envisions the provision of personalized services to citizens, according to their specific needs. This process includes not only the automation of public services, but also the creation of smart platforms to allow more direct interaction between citizens and government.

Brazil, in turn, updated its AI strategy and in July published its AI Plan 2024-2028. One of its axes focuses on the use of AI to improve public services. The Brazilian plan also envisions personalizing public services by offering citizens content that is contextual, targeted, and proactive. It involves state data infrastructures and the implementation of data interoperability among government institutions. AI-based projects proposed in the plan include early detection of neurodegenerative diseases and a "predict and protect" system to assess the school or university trajectory of students.

Each of these actions may have potential benefits, but also come with major challenges and risks to human rights. These involve the massive amount of personal data, including sensitive data, that those systems may process and cross-reference to provide personalized services, potential biases and disproportionate data processing in risk assessment systems, as well as incentives towards a problematic assumption that automation can replace human-to-human interaction between governments and their population. Choices about how to collect data and which technologies to adopt are ultimately political, although they are generally treated as technical and distant from political discussion. 

An important basic step is government transparency about the AI systems in use by public institutions or being tested in pilot programs. At a minimum, that transparency should range from actively informing people that these systems exist, with critical details on their design and operation, to qualified information and indicators about their results and impacts.

Despite the increasing adoption of algorithmic systems by public bodies in Latin America (a 2023 study, for instance, mapped 113 government ADM systems in use in Colombia), robust transparency initiatives are only in their infancy. Chile stands out in that regard with its repository of public algorithms, and Brazil launched the Brazilian AI Observatory (OBIA) in 2024. Similar to the regional ILIA (Latin American Artificial Intelligence Index), OBIA features meaningful data to measure the state of adoption and development of AI systems in Brazil, but it still doesn't contain detailed information about AI-based systems in use by government entities.

The most challenging and controversial application from a human-rights and accountability standpoint is government use of AI in security-related activities.

Government surveillance and emerging technologies

During 2024, Argentina's new administration, under President Javier Milei, passed a set of acts regulating its police forces' cyber and AI surveillance capacities. One of them, issued in May, stipulates how police forces must conduct "cyberpatrolling", or Open-Source Intelligence (OSINT), to prevent crimes. OSINT activities do not necessarily entail the use of AI, but they have increasingly integrated AI models, which facilitate the analysis of huge amounts of data. While OSINT has important and legitimate uses, including for investigative journalism, its application for government surveillance purposes has raised many concerns and led to abuses.

Another regulation, issued in July, created the "Unit of Artificial Intelligence Applied to Security" (UIAAS). The powers of the new agency include “patrolling open social networks, applications and Internet sites” as well as “using machine learning algorithms to analyze historical crime data and thus predict future crimes”. Civil society organizations in Argentina, such as Observatorio de Derecho Informático Argentino, Fundación Vía Libre, and Access Now, have gone to court to enforce their right to access information about the newly created unit.

The persistent opacity of, and lack of effective remedies for abuses in, government use of digital surveillance technologies in the region prompted action from the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR). The Office of the Special Rapporteur carried out a consultation to receive input about abuses of digitally powered surveillance, the state of digital surveillance legislation, the reach of the private surveillance market in the region, transparency and accountability challenges, and gaps and best-practice recommendations. EFF joined expert interviews and submitted comments in the consultation process. The final report, with important analysis and recommendations, will be published next year.

Moving Forward: Building on Inter-American Human Rights Standards for a Proper Government Use of AI in Latin America

Considering this broader context of challenges, we launched a comprehensive report on the application of Inter-American Human Rights Standards to government use of algorithmic systems for rights-based determinations. Delving into Inter-American Court decisions and IACHR reports, we provide guidance on what state institutions must consider when assessing whether and how to deploy AI and ADM systems for determinations potentially affecting people's rights.

We detailed what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explained why this adoption must meet necessary and proportionate principles, and what this entails. We highlighted what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment. We elaborated on human rights implications building off key rights enshrined in the American Convention on Human Rights and the Protocol of San Salvador, setting up an operational framework for their due application.

Based on the report, we have connected to oversight institutions, joining trainings for public prosecutors in Mexico and strengthening ties with the Public Defender's Office in the state of São Paulo, Brazil. Our goal is to provide inputs for their adequate adoption of AI/ADM systems and for fulfilling their role as public interest entities regarding government use of algorithmic systems more broadly. 

Enhancing public oversight of state deployment of rights-affecting technologies in a context of marked government digitalization is essential for democratic policy making and human-rights aligned government action. Civil society also plays a critical role, and we will keep working to raise awareness about potential impacts, pushing for rights to be fortified, not eroded, throughout the way.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review

By: Jason Kelley
January 1, 2025, 10:26

At times this year, it seemed that Congress was going to give up its duty to protect our rights online—particularly when the Senate passed the dangerous Kids Online Safety Act (KOSA) by a large majority in July. But this legislation, which would chill protected speech and almost certainly result in privacy-invasive age verification requirements for many users to access social media sites, did not pass the House this year, thanks to strong opposition from EFF supporters and others.  

KOSA, first introduced in 2022, would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to content. Congress introduced a number of versions of the bill this year, and we analyzed each of them. Unfortunately, the threat of this legislation still looms over us as we head into 2025, especially now that the bill has passed the Senate. And just a few weeks ago, its authors introduced an amended version to respond to criticisms from some House members.  

Despite KOSA’s many amendments in 2024, we continue to oppose the bill. No matter which version becomes final, it will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms it identifies.

Here’s how, and why, we worked to stop KOSA this year, and where the fight stands now.  

New Versions, Same Problems

The biggest problem with KOSA is in its vague “duty of care” requirements. Imposing a duty of care on a broad swath of online services, and requiring them to mitigate specific harms based on the content of online speech, will result in those services imposing age verification and content restrictions. We’ve been critical of KOSA for this reason since it was introduced in 2022. 

In February, KOSA's authors in the Senate released an amended version of the bill, in part as a response to criticisms from EFF and other groups. The updates changed how KOSA regulates design elements of online services and removed some enforcement mechanisms, but didn’t significantly change the duty of care or the bill’s main effects. The updated version of KOSA would still create a censorship regime that would harm a large number of minors who have First Amendment rights to access lawful speech online, and force users of all ages to verify their identities to access that same speech, as we wrote at the time. KOSA’s requirements are comparable to cases in which the government tried to prevent booksellers from disseminating certain books; those attempts were found unconstitutional.

Kids Speak Out

The young people whom KOSA supporters claim they’re trying to help have spoken up about the bill. In March, we published the results of a survey of young people who gave detailed reasons for their opposition to the bill. Thousands told us how beneficial access to social media platforms has been for them, and why they feared KOSA’s censorship. Too often we’re not hearing from minors in these debates at all, but we should be, because they will be most heavily impacted if KOSA becomes law.

Young people told us that KOSA would negatively impact their artistic education, their ability to find community online, their opportunity for self-discovery, and the ways they learn accurate news and other information. To sample just a few of the comments: Alan, a fifteen-year-old, wrote,

I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for!  

More Recent Changes To KOSA Haven’t Made It Better 

In May, the U.S. House introduced a companion version to the Senate bill. This House version modified the bill around the edges, but failed to resolve its fundamental censorship problems. The primary difference in the House version was to create tiers that change how the law would apply to a company, depending on its size.  

These are insignificant changes, given that most online speech happens on just a handful of the biggest platforms. Those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care and would be held to the strictest knowledge standard. 

The other major shift was to update the definition of “compulsive usage” by suggesting it be linked to the Diagnostic and Statistical Manual of Mental Disorders, or DSM. But simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. 

KOSA Passes the Senate

KOSA passed through the Senate in July, though legislators on both sides of the aisle remain critical of the bill.  

A version of KOSA introduced in September tinkered with the bill again but did not change the censorship requirements. This version replaced language about anxiety and depression with a requirement that apps and websites prevent “serious emotional disturbance.”

In December, the Senate released yet another version of the bill—this one written with the assistance of X CEO Linda Yaccarino. This version includes a throwaway line about protecting the viewpoints of users as long as those viewpoints are “protected by the First Amendment to the Constitution of the United States.” But user viewpoints were never threatened by KOSA; rather, the bill has always been meant to threaten the hosts of user speech—and it still does.

KOSA would allow the FTC to exert control over online speech, and there’s no reason to think the incoming FTC won’t use that power. The nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has promised to protect free speech by “fighting back against the trans agenda,” among other things. KOSA would give the FTC, under this or any future administration, wide latitude to decide what sort of content should be restricted because it views that content as harmful to kids. And even if the law is never enforced, just passing KOSA would likely result in platforms taking down protected speech.

If KOSA passes, we’re also concerned that it would lead to mandatory age verification on apps and websites. Such requirements have their own serious privacy problems; you can read more about our efforts this year to oppose mandatory online ID in the U.S. and internationally.   

EFF thanks our supporters, who have sent nearly 50,000 messages to Congress on this topic, for helping us oppose KOSA this year. In 2025, we will continue to rally to protect privacy and free speech online.   

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

AI and Policing: 2024 in Review

December 31, 2024, 10:02

There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing can take many forms that we can trace back decades—including various forms of face recognition, predictive policing, data analytics, and automated gunshot recognition. But this year has seen the rise of a new and troublesome development in the integration of policing and artificial intelligence: AI-generated police reports.

Egged on by companies like Truleo and Axon, a market is rapidly growing for vendors that use a large language model to write police reports for officers. In the case of Axon, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input, except for a prompt to add a few details here and there.
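Axon hasn’t published its implementation, but the general shape of such a pipeline is easy to sketch. The following is a hypothetical illustration, not the vendor’s actual code: transcription uses the open-source `whisper` package, and `draft_report_with_llm` is a placeholder for whatever proprietary model a vendor calls:

```python
# A hypothetical sketch of an audio-to-report pipeline; NOT Axon's code.
import whisper

def transcribe_bodycam(path: str) -> str:
    # Speech-to-text over the body-worn camera's audio track.
    model = whisper.load_model("base")
    return model.transcribe(path)["text"]

def draft_report_with_llm(transcript: str) -> str:
    # Placeholder for a vendor's proprietary LLM call (hypothetical).
    prompt = (
        "Write a first-person police incident report narrative based "
        "only on this body-camera transcript:\n\n" + transcript
    )
    raise NotImplementedError(prompt)

# transcript = transcribe_bodycam("bodycam_audio.wav")
# draft = draft_report_with_llm(transcript)  # officer edits and signs
```

Every stage of a pipeline like this can err: the transcription can mishear, and the language model can invent details that were never said.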

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?

And there are so many more questions we have. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict or important rhetorical tools of policing like the phrases “stop resisting” and “drop the weapon,” even if a person is unarmed or is not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if the tool is not explicitly made to handle these situations, officers left to their own devices will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork, and if these pop culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and will provide more information as we continue to learn how it’s being used.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting Online ID Mandates: 2024 In Review

December 31, 2024, 10:02

This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts because they censor the internet and burden access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.

Age verification bills generally require online services to verify all users’ ages—often through invasive tools like ID checks, biometric scans, and other dubious “age estimation” methods—before granting them access to certain online content or services. Some state bills mandate age verification explicitly, including Texas’s H.B. 1181, Florida’s H.B. 3, and Indiana’s S.B. 17. Other state bills claim not to require age verification, but still threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi’s H.B. 1126, Ohio’s Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users are minors without imposing age verification?

EFF’s answer: they can’t. We call these bills “implicit age verification mandates” because, though they might expressly deny requiring age verification, they still force platforms either to impose age verification measures or, worse, to censor whatever content or features are deemed “harmful to minors” for all users—not just young people—in order to avoid liability.

Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric surveillance just to access lawful online speech.

EFF’s Work Opposing State Age Verification Bills

Last year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the states have passed at least one of these proposals into law.

Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional, confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in Ohio, Indiana, Utah, and Mississippi enjoined those states’ age verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all users—without restricting their speech.

Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.

California

In January, we submitted public comments opposing an especially vague and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults, and cause platforms to censor user content and impose mandatory age verification in order to avoid this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.

In February, we filed a friend-of-the-court brief arguing that California’s Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC’s age estimation scheme and vague description of “harmful content” render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely violate the First Amendment and provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the AADC’s age-verification provision specifically.

Later in the year, we also filed a letter to California lawmakers opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that allow politicians to define what “sexually explicit” content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We declared victory in September when the bill failed to get passed by the legislature.

New York

Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many methods of age verification listed in the attorney general’s call for comments is both privacy-protective and entirely accurate, as various experts have reported.

Texas

We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.

In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.

Mississippi

Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.

In our June brief for the district court, we once again explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and heavily cited our amicus brief.

Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical resources and support for the very same harms these bills claim to address. In our brief, we urged the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on for vibrant self-expression and crucial support.

Looking Ahead

As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are already on file.

EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Federal Regulators Limit Location Brokers from Selling Your Whereabouts: 2024 in Review

December 31, 2024, 10:02

The opening and closing months of 2024 saw federal enforcement against a number of location data brokers that track and sell users’ whereabouts through apps installed on their smartphones. In January, the Federal Trade Commission brought successful enforcement actions against X-Mode Social and InMarket, banning the companies from selling precise location data—a first prohibition of this kind for the FTC. And in December, the FTC widened its net to two additional companies—Gravy Analytics (Venntel) and Mobilewalla—barring them from selling or disclosing location data on users visiting sensitive areas such as reproductive health clinics or places of worship. In previous years, the FTC has sued location brokers such as Kochava, but the invasive practices of these companies have only gotten worse. Seeing the federal government ramp up enforcement is a welcome development for 2024.

As regulators have clearly stated, location information is sensitive personal information. Companies can glean location information from your smartphone in a number of ways. Software Development Kits (SDKs) from these companies, embedded in ordinary apps, instruct the app to send back troves of sensitive information for analytical insights or debugging purposes; the data brokers may offer market insights or financial incentives for app developers to include their SDKs. Other companies don’t ask apps to include their SDKs at all, but instead participate in Real-Time Bidding (RTB) auctions, placing bids for ad space on devices in locations they specify. Even if they lose the auction, they can glean valuable device location information just by participating. Often, apps will ask for permissions such as location data for legitimate reasons aligned with the purpose of the app: for example, a price comparison app might use your whereabouts to show you the cheapest vendor of a product you’re interested in for your area. What you aren’t told is that your location is also shared with companies tracking you.
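To illustrate the RTB pathway, here is a sketch of the location fields in an OpenRTB 2.x bid request. The field names come from the public OpenRTB specification; the app and all values are invented. Every bidder that receives the request sees this data, whether or not it wins the auction:

```python
# A sketch of the data exposed to every participant in an RTB auction.
# Field names follow the public OpenRTB 2.x spec; values are invented.
bid_request = {
    "id": "auction-1234",
    "app": {"bundle": "com.example.weather"},  # hypothetical app
    "device": {
        "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # advertising ID
        "geo": {
            "lat": 32.6657,  # precise coordinates from the device
            "lon": -115.4989,
            "type": 1,       # 1 = derived from GPS/location services
        },
    },
    "user": {"id": "broker-assigned-user-id"},
}
```

Pair the advertising ID with a timestamped stream of such coordinates and a broker can reconstruct where a device sleeps, works, and worships.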

A number of revelations this year gave us better insight into how the location data broker industry works, revealing the inner workings of powerful tools such as Locate X, which grants access to troves of mobile location data across the planet even to people merely claiming they might work with law enforcement at some point in the future. The mobile location tracking company FOG Data Science, which EFF revealed in 2022 to be selling troves of information to local police, was found this year to also be soliciting law enforcement for information on the doctors of suspects, in order to track suspects via their doctor visits.

A number of revelations this year gave us better insight into how the location data broker industry works

EFF detailed how these tools can be stymied via technical means, such as changing a few key settings on your mobile device to disallow data brokers from linking your location across space and time. We further outlined legislative avenues to ensure structural safeguards are put in place to protect us all from an out-of-control predatory data industry.

In addition to FTC action, the Consumer Financial Protection Bureau proposed a new rule meant to crack down on the data broker industry. As the CFPB mentioned, data brokers compile highly sensitive information—like information about a consumer's finances, the apps they use, and their location throughout the day. The rule would include stronger consent requirements and protections for personal data that has been purportedly de-identified. Given the abuses the announcement cites, including the distribution and sale of “detailed personal information about military service members, veterans, government employees, and other Americans,” we hope to see adoption and enforcement of this proposed rule in 2025.

This year has seen a strong regulatory appetite to protect consumers from harms which in bygone years would have seemed unimaginable: detailed records on the movements of nearly everyone, packaged and made available for pennies. We hope 2025 continues this appetite to address the dangers of location data brokers.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Exposing Surveillance at the U.S.-Mexico Border: 2024 Year in Review in Pictures

By: Dave Maass
December 30, 2024, 10:53

Some of the most picturesque landscapes in the United States can be found along the border with Mexico. Yet, from San Diego’s beaches to the Sonoran Desert, from Big Bend National Park to the Boca Chica wetlands, we see vistas marred by the sinister spread of surveillance technology, courtesy of the federal government.  

EFF refuses to let this blight grow without documenting it, exposing it, and finding ways to fight back alongside the communities that live in the shadow of this technological threat to human rights.  

Here’s a gallery of images representing our work and the new developments we’ve discovered in border surveillance in 2024.

1. Mapping Border Surveillance  

A map of the US-Mexico border, with dots representing surveillance towers.

EFF’s stand-up display of surveillance at the US-Mexico border. Source: EFF

EFF published the first iteration of our map of surveillance towers at the U.S.-Mexico border in spring 2023, having pinpointed the precise locations of 290 towers, a fraction of what we knew might be out there. A year and a half later, with the help of local residents, researchers, and search-and-rescue groups, our map now includes more than 500 towers.
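As a hypothetical sketch of how crowdsourced sightings become a map like ours (using the `folium` Python library; the coordinates below are invented placeholders, not actual tower locations from our dataset):

```python
# A hypothetical sketch: plot crowdsourced tower sightings on an
# interactive map. Coordinates are invented placeholders.
import folium

towers = [  # (latitude, longitude, label) gathered by volunteers
    (32.54, -116.97, "example tower A"),
    (31.33, -110.94, "example tower B"),
]

m = folium.Map(location=[31.5, -110.0], zoom_start=6)
for lat, lon, label in towers:
    folium.CircleMarker(
        location=[lat, lon], radius=5, popup=label, color="red"
    ).add_to(m)

m.save("border_towers.html")  # open in a browser to explore
```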

In many cases, the towers are brand new, with some going up as recently as this fall. We’ve also added the location of surveillance aerostats, checkpoint license plate readers, and face recognition at land ports of entry. 

In addition to our online map, we also created a 10’ x 7’ display that we debuted at “Regardless of Frontiers: The First Amendment and the Exchange of Ideas Across Borders,” a symposium held by the Knight First Amendment Institute at Columbia University in October. If your institution would be interested in hosting it, please email us at aos@eff.org.

2. Infrastructures of Control

An overhead view of a University of Arizona courtyard with a model surveillance tower and mounted photographs of surveillance technology. A person looks at one of the displays.

The Infrastructures of Control exhibit at University of Arizona. Source: EFF

Two University of Arizona geographers—Colter Thomas and Dugan Meyer—used our map to explore the border, driving on dirt roads and hiking in the desert, to document the infrastructure that comprises the so-called “virtual wall.” The result: “Infrastructures of Control,” a photography exhibit in April at the University of Arizona that also included a near-actual-size replica of an “autonomous surveillance tower.”

You can read our interview with Thomas and Meyer here.

3. An Old Tower, a New Lease in Calexico 

A surveillance tower over a one-story home.

A remote video surveillance system in Calexico, Calif. Source: EFF

Way back in 2000, the Immigration and Naturalization Service—which oversaw border security prior to the creation of Customs and Border Protection (CBP) within the Department of Homeland Security (DHS)—leased a small square of land in a public park in Calexico, Calif., where it then installed one of the earliest border surveillance towers. The lease lapsed in 2020, and with plans for a massive surveillance upgrade looming, CBP rushed to try to renew the lease this year.

This was especially concerning because of CBP’s new strategy of combining artificial intelligence with border camera feeds.  So EFF teamed up with the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition to try to convince the Calexico City Council to either reject the lease or demand that CBP enact better privacy protections for residents in the neighboring community and children playing in Nosotros Park. Unfortunately, local politics were not in our favor. However, resisting border surveillance is a long game, and EFF considers it a victory that this tower even got a public debate at all. 

4. Aerostats Up in the Air 

A white blimp on the ground in the middle of the desert.

The Tactical Aerostat System at Santa Teresa Station. Source: Battalion Search and Rescue (CC BY)

CBP seems incapable of developing a coherent strategy when it comes to tactical aerostats—tethered blimps equipped with long-range, high-definition cameras. In 2021, the agency said it wanted to cancel the program, which involved four aerostats in the Rio Grande Valley, before reversing itself. Then in 2022, CBP launched new aerostats in Nogales, Ariz., and Columbus, N.M., and announced plans to launch 17 more within a year.

But by 2023, CBP had left the program out of its proposed budget, saying the aerostats would be decommissioned. 

And yet, in fall 2024, CBP launched a new aerostat at the Santa Teresa Border Patrol Station in New Mexico. Our friends at Battalion Search & Rescue gathered photo evidence for us. Soon after, CBP issued a new solicitation for the aerostat program and a member of Congress told Border Report that the aerostats may be upgraded and as many as 12 new ones may be acquired by CBP via the Department of Defense.

Meanwhile, one of CBP’s larger Tethered Aerostat Radar Systems in Eagle Pass, Texas, was down for most of the year after deflating in high winds. CBP has reportedly not been interested in paying hundreds of thousands of dollars to get it up again.

5. New Surveillance in Southern Arizona

A desert scene with a rust-colored pole surrounded by razor wire.

A Buckeye Camera on a pole along the border fence near Sasabe, Ariz. Source: EFF

Buckeye Cameras are motion-triggered cameras that were originally designed for hunters and ranchers to spot wildlife, but border enforcement authorities—both federal and state/local—realized years ago that they could be used to photograph people crossing the border. These cameras are often camouflaged (e.g. hidden in trees, disguised as garbage, or coated in sand).  

Now, CBP is expanding its use of Buckeye Cameras. During a trip to Sasabe, Ariz., we discovered that CBP is now placing Buckeye Cameras in checkpoints, welding them to the border fence, and installing metal poles, wrapped in concertina wire, with Buckeye Cameras at the top.

A zoomed-in image of a surveillance tower on a desert hill.

A surveillance tower along the highway west of Tucson. Source: EFF

On that same trip to Southern Arizona, EFF (along with the Infrastructures of Control geographers) passed through a checkpoint west of Tucson, where previously we had identified a relocatable surveillance tower. But this time it was gone. Why, we wondered? Our question was answered just a minute or two later, when we spotted a new surveillance tower on a nearby hill-top, a new model that we had not previously seen deployed in the wild.  

6. Artificial Intelligence  

An overly complicated graphic of surveillance towers, blimps, and other technologies.

A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection

CBP and other agencies regularly hold “Industry Days” to brief contractors on the new technology and capabilities the agency may want to buy in the near future. In January, EFF attended one such “Industry Day,” designed to bring tech vendors up to speed on the government’s horrific vision of a border secured by artificial intelligence (see the graphic above for an example of that vision).

A complicated graphic illustrating how technology tracks someone crossing the border.

A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection

At this event, CBP released the convoluted flow chart above as part of a slide show. Since it’s so difficult to parse, here’s the best sense we can make of it: when someone crosses the border, they trigger an unattended ground sensor (UGS); a camera then autonomously detects, identifies, classifies, and tracks the person, handing them off from camera to camera; and the AI system eventually alerts Border Patrol to dispatch someone to intercept them for detention.

7. Congress in Virtual Reality

A member of Congress sitting at a table with a white Meta Quest 2 headset strapped to his head.

Rep. Scott Peters on our VR tour of the border. Source: Peters’ Instagram

We search for surveillance on the ground. We search for it in public records. We search for it in satellite imagery. But we’ve also learned we can use virtual reality in combination with Google Streetview not only to investigate surveillance, but also to introduce policymakers to the realities of the policies they pass. This year, we gave Rep. Scott Peters (D-San Diego) and his team a tour of surveillance at the border in VR, highlighting the impact on communities.  

“[EFF] reminded me of the importance of considering cost-effectiveness and Americans’ privacy rights,” Peters wrote afterward in a social media post.

We also took members of Rep. Mark Amodei’s (R-Reno) district staff on a similar tour. Other Congressional staffers should contact us at aos@eff.org if you’d like to try it out.  

Learn more about how EFF uses VR to research the border in this interview and this lightning talk.

8. Indexing Border Tech Companies 

A militarized off-road vehicle in an exhibition hall.

An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)

In partnership with the Heinrich Böll Foundation, EFF and University of Nevada, Reno student journalist Andrew Zuker built a dataset of hundreds of vendors marketing technology to the U.S. Department of Homeland Security. As part of this research, Zuker journeyed to El Paso, Texas for the Border Security Expo, where he systematically gathered information from all the companies promoting their surveillance tools. You can read Zuker’s firsthand report here.

9. Plataforma Centinela Inches Skyward 

A small surveillance trailer outside a retail store in Ciudad Juarez.

An Escorpión unit, part of the state of Chihuahua’s Plataforma Centinela project. Source: EFF

In fall 2023, EFF released its report on the Plataforma Centinela, a massive surveillance network being built by the Mexican state of Chihuahua in Ciudad Juarez that will include more than 10,000 cameras, face recognition, artificial intelligence, and tablets that police can use to access all this data from the field. At its center is the Torre Centinela, a 20-story headquarters that was supposed to be completed in 2024.

A construction site with a crane and the first few floors of a skyscraper.

The site of the Torre Centinela in downtown Ciudad Juarez. Source: EFF

We visited Ciudad Juarez in May 2024 and saw that indeed, new cameras had been installed along roadways, and the government had begun using “Escorpión” mobile surveillance units, but the tower was far from being completed. A reporter who visited in November confirmed that not much more progress had been made, although officials claim that the system will be fully operational in 2025.

10. EFF’s Border Surveillance Zine 

Two copies of the purple-themed Border Surveillance Technology zine.

Do you want to review even more photos of surveillance that can be found at the border, whether planted in the ground, installed by the side of the road, or floating in the air? Download EFF’s new zine in English or Spanish—or if you live or work in the border region, email us at aos@eff.org and we’ll mail you hard copies.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting Automated Oppression: 2024 in Review

By: Kit Walsh
December 30, 2024, 10:50

EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024 the field was more active than ever, with landlords, employers, regulators, and police adopting new systems that have the potential to impact both personal freedom and access to necessities like medicine and housing.

This year, we wrote detailed reports and comments to U.S. and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train an algorithm on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don’t (and typically can’t) explain their reasoning, challenging their outputs is very difficult.

If you train it on a biased dataset, you are creating a technology to automate systemic, historical injustice.
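A toy demonstration of that dynamic, as a hypothetical sketch in Python with scikit-learn and entirely synthetic data: historical approvals required a higher score from one group, and a model trained on that record reproduces the disparity even when applicants have identical merit.

```python
# A hypothetical sketch: training on biased historical decisions
# reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # protected attribute (or a proxy for it)
merit = rng.normal(size=n)      # the factor that *should* decide outcomes

# Biased history: group 1 needed a higher score to be approved.
approved = (merit > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train a "neutral" model on the biased record.
model = LogisticRegression().fit(np.column_stack([merit, group]), approved)

# At identical merit, the model faithfully reproduces the disparity.
for g in (0, 1):
    X = np.column_stack([np.zeros(1000), np.full(1000, g)])
    print(f"approval rate for group {g} at equal merit:",
          round(model.predict(X).mean(), 2))
```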

It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the kind of public involvement that a rule change would ordinarily entail. This, of course, increases the likelihood that vulnerable members of the public will be harmed and that technologies will be adopted without meaningful vetting. While there may be positive use cases for machine learning to analyze government processes and phenomena in the world, making decisions about people is one of the worst applications of this technology, one that entrenches existing injustice and creates new, hard-to-discover errors that can ruin lives.

Vendors of ADM have been riding a wave of AI hype, and police, border authorities, and spy agencies have gleefully thrown taxpayer money at products that make it harder to hold them accountable while being unproven at offering any other ‘benefit.’ We’ve written about the use of generative AI to write police reports based on the audio from bodycam footage, flagged how national security use of AI is a threat to transparency, and called for an end to AI Use in Immigration Decisions.

The hype around AI and the allure of ADMs has further incentivized the collection of more and more user data.

The private sector is also deploying ADM to make decisions about people’s access to employment, housing, medicine, and more. People have an intuitive understanding of some of the risks this poses, with most Americans expressing discomfort about the use of AI in these contexts. Companies can make a quick buck firing people and demanding the remaining workers figure out how to implement snake-oil ADM tools to make these decisions faster, though it’s becoming increasingly clear that this isn’t delivering the promised productivity gains.

ADM can, however, help a company avoid being caught making discriminatory decisions that violate civil rights laws—one reason why we support mechanisms to prevent unlawful private discrimination using ADM. Finally, the hype around AI and the allure of ADMs has further incentivized the collection and monetization of more and more user data and more invasions of privacy online, part of why we continue to push for a privacy-first approach to many of the harmful applications of these technologies.

In EFF’s podcast episode on AI, we discussed some of the challenges posed by AI and some of the positive applications this technology can have when it’s not used at the expense of people’s human rights, well-being, and the environment. Unless something dramatically changes, though, using AI to make decisions about human beings is unfortunately doing a lot more harm than good.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

State Legislatures Are The Frontline for Tech Policy: 2024 in Review

State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, state legislators are rapidly advancing their own ideas into law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.

This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.

Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year’s session.

EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues.

States Continue to Experiment

Several states also introduced bills this year that raise issues similar to those of the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead introduces considerable censorship and privacy concerns.

For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that its definition of “sexually explicit content” was unclear. This vagueness would have created barriers that prevent youth—particularly LGBTQ+ youth—from accessing legitimate content online.

We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers in more than a dozen states filed bills with this requirement. As we said in comments to the New York Attorney General’s office on the state’s recently passed “SAFE for Kids Act,” none of the age-verification methods the state was considering is both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law, a federal judge put its enforcement on pause (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to focus on the harms that deepfakes pose without endangering speech rights.

On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont advanced bills that significantly improve on the state privacy laws we’ve seen before. The Maryland Online Data Privacy Act (MODPA)—authored by State Senator Dawn Gile and Delegate Sara Love (now State Senator Sara Love)—contains strong data minimization requirements. Vermont’s privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.

Looking Ahead to 2025

2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms. 

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review

29 décembre 2024 à 06:01

Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in freeing the law, right to repair, and more. It’s a great, easy-to-read summary of the year’s work, and it contains interesting tidbits about the impact we’ve made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF’s podcast, “How to Fix the Internet”? Or that EFF had donors in 88 countries?

As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated Congress, state legislatures, the U.S. Supreme Court, and the European Union.

EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful censorship strategies.  

The longest-running fight we won in 2023 was to free the law: in our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from finding, reading, and sharing laws, regulations, and building codes online. We also won a major victory in helping to pass a law in California to increase tech users’ ability to control their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that’s just barely scratching the surface.

Read the Report

Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been able to do this year thanks to your help.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Aerial and Drone Surveillance: 2024 in Review

Par : Hannah Zhao
29 décembre 2024 à 05:50

We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your own backyard, you are exposed to eyes from above.

Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were developed by the military before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.

This year, we focused on fighting back against aerial surveillance facilitated by advances in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we argued that cases decided when people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of surveillance in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from that era are just as outdated as 16K RAM packs and magnetic videotape. And we have applauded courts for recognizing that.

Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.  

Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as “drone as first responder” programs, which ostensibly allow police to assess a situation–whether it’s dangerous or requires a police response at all–before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.

Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy, those assurances can be hollow. The NYPD promised that it would not surveil constitutionally protected backyards with drones, but Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.

Alarmingly, there are increasing calls by police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a taser-armed drone intended for school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd control weapons.

As drones carry more technological payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional right to privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review

29 décembre 2024 à 05:50

This was a historic year: a year in which elections took place in countries home to almost half the world’s population, a year of war, and a year of collapse or chaos within several governments. It was also a year of new technological developments, policy changes, and legislation. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.

Internet shutdowns

It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—restricted access to the internet at least partially during election periods. These restrictions inhibit people from being able to share news of what’s happening on the ground, but they also impede access to basic services, commerce, and communications.

Repression of speech in times of conflict

But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.

Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel’s Cyber Unit, submitted comments to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (with which the Oversight Board notably agreed), and submitted comments to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact that restrictions imposed by governments and companies have on expression.

In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.

Restrictions on content, age, and identity

Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by means of requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also require mandatory age verification, and would place penalties on websites and apps that host otherwise-legal content deemed “harmful” by regulators to minors. And similarly in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has also enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.

While these governments ostensibly aim to protect children from harm, as we have repeatedly demonstrated, these measures can also harm young people by preventing them from accessing information that is not taught in schools or otherwise available in their communities.

One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.

Cybercrime

We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Cars (and Drivers): 2024 in Review

28 décembre 2024 à 15:18

If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras, GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.

Cars Sure Are Sharing a Lot of Information

While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was sharing information about drivers’ habits with insurance companies without consent.

It turned out a number of other car companies were doing the same, using deceptive design so people didn’t always realize they were opting into the program. We walked through how to see for yourself what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to research, let alone opt out of, data sharing.

Which is why we were happy to see Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.

Advocating for Better Bills to Protect Abuse Survivors

The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.

This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.

But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate access immediately, and only require some follow-up documentation up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later" framework. This approach is helpful for survivors in one scenario—a survivor who has no documentation of their abuse, and who needs to get away immediately in a car owned by their abuser. Unfortunately, this approach also opens up many new avenues of stalking, harassment, and abuse for survivors. These bills ended up being combined into S.B. 1394, which retained some provisions we remain concerned about.

It’s Not Just the Car Itself

Because of everything else that comes with car ownership, a car is just one piece of the mobile privacy puzzle.

This year we fought against A.B. 3138 in California, which proposed adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important issue that we’ll fight for.

We wrote about a bulletin released by the U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of this vulnerability is alarming: EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates.
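To illustrate why scans at that scale are so revealing, here is a small, hypothetical sketch in Python; the file name and column names are invented for illustration and are not Vigilant’s actual export format. It shows how trivially a crude “pattern of life” falls out of bulk plate reads:

```python
# Hypothetical sketch: inferring a crude "pattern of life" from bulk ALPR data.
# The file name and columns (plate, timestamp, lat, lon) are invented.
import pandas as pd

scans = pd.read_csv("alpr_scans.csv", parse_dates=["timestamp"])

one_car = scans[scans["plate"] == "7ABC123"].copy()
one_car["hour"] = one_car["timestamp"].dt.hour

# Rounding coordinates crudely buckets nearby scans into the same cell.
one_car["cell"] = (one_car["lat"].round(3).astype(str) + ","
                   + one_car["lon"].round(3).astype(str))

# Where a car sits overnight is usually home; daytime clusters are
# usually work.
overnight = one_car[(one_car["hour"] >= 22) | (one_car["hour"] <= 5)]
daytime = one_car[one_car["hour"].between(9, 17)]

print("likely home:", overnight["cell"].mode().iat[0])
print("likely work:", daytime["cell"].mode().iat[0])
```

A dozen lines of off-the-shelf code, applied across 1.6 billion rows, can turn a pile of timestamps into a map of where millions of people sleep and work.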

Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs—which range from equity to privacy problems—and put together an FAQ to help you decide if you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both due to the unanswered questions about their privacy and security, and their potential use for government-mandated age verification on the internet.

The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and increasingly necessary, for cars, too.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Behind the Diner—Digital Rights Bytes: 2024 in Review

28 décembre 2024 à 15:00

Although it feels a bit weird to be writing a year in review post for a site that hasn’t even been live for three months, I thought it would be fun to give a behind-the-scenes look at the work we did this year to build EFF’s newest site, Digital Rights Bytes. 

Since each topic Digital Rights Bytes aims to tackle is in the form of a question, why not do this Q&A style? 

Q: WHAT IS DIGITAL RIGHTS BYTES?

Great question! At its core, Digital Rights Bytes is a place where you can get honest answers to the questions that have been bugging you about technology. 

The site was originally pitched as ‘EFF University’ (or EFFU, pun intended) to help folks who aren’t part of our core tech-savvy base get up-to-speed on technology issues that may be affecting their everyday lives. We really wanted Digital Rights Bytes to be a place where newbies could feel safe learning about internet freedom issues, get familiar with EFF’s work, and find out how to get involved, without feeling too intimidated. 

Q: WHY DOES THE SITE LOOK SO DIFFERENT FROM OTHER EFF WORK?

With our main goal of attracting new readers, it was crucial to brand Digital Rights Bytes differently from other EFF projects. We wanted Digital Rights Bytes to feel like a place where you and a friend might casually chat over milkshakes while being served pancakes by a friendly robot. We took that concept and ran with it, going forward with a full diner theme for the site. I mean, imagine the counter banter you could have at the Digital Rights Bytes Diner!

Digital Rights Bytes banner image featuring animals at a diner counter

Take a look at the Digital Rights Bytes counter!

As part of this concept, we thought it made sense for each topic to be framed as a question. Of course, at EFF, we get a ton of questions from supporters and other folks online about internet freedom issues, including from our own family and friends. We took some of the questions we see fairly often, then decided which would be the most important—and most interesting—to answer.  

The diner concept is why the site has a bright neon logo, pink and cyan colors, and a neat vintage-looking background on desktop. Even the gif that plays on the home screen of Digital Rights Bytes shows our animal characters chatting ‘round the diner (more on them soon!).

Q: WHY DID YOU MAKE DIGITAL RIGHTS BYTES?

Here’s the thing: technology continues to expand, evolve, and change—and it’s tough to keep up! We’ve all been the tech noob, trying to figure out why our devices behave the way they do, and it can be pretty overwhelming.  

So, we thought that we could help out with that! And what better way to help educate newcomers than explaining these tech issues in short byte-sized videos: 

Clip from the Right to Repair video of Digital Rights Bytes

A clip from the device repair video.

It took some time to nail down the style for the videos on Digital Rights Bytes. But, after some trial and error, we landed on using animals as our lead characters: (a) because they’re adorable, and (b) because it helped emphasize the shadowy figures that were often trying to steal their data or make their tech worse for them. It’s often unclear who is trying to steal our data or rig tech to be worse for the user, so we thought this was fitting.

In addition to the videos, EFF issue experts wrote concise, easy-to-read pages further detailing each topic, with an emphasis on linking to other experts and including information on how you can get involved.

Q: HAS DIGITAL RIGHTS BYTES BEEN SUCCESSFUL?

You tell us! If you’re reading these Year In Review blog posts, you’re probably the designated “ask them every tech question in the world” person of your family. Why not send your family and friends over to Digital Rights Bytes and let us know if the site has been helpful to them! 

We’re also looking to expand the site and answer more common questions you and I might hear. If you have suggestions, you should let us know here or on social media! Just use the hashtag #DigitalRightsBytes and we’ll be sure to consider it. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

NSA Surveillance and Section 702 of FISA: 2024 in Review

28 décembre 2024 à 14:58

Mass surveillance authority Section 702 of FISA, which allows the government to collect international communications, many of which happen to have one side in the United States, has been renewed several times since its creation with the passage of the 2008 FISA Amendments Act. This law has been an incessant threat to privacy for over a decade because the FBI operates on the “finders keepers” rule of surveillance: it believes that because the NSA has “incidentally” collected the US side of conversations, it is free to sift through them without a warrant.

But 2024 became the year this mass surveillance authority was not only reauthorized by a lion’s share of both Democrats and Republicans—it was also the year the law got worse. 

After a tense fight, some temporary reauthorizations, and a looming expiration, Congress finally passed the Reforming Intelligence and Securing America Act (RISAA) in April 2024. RISAA not only reauthorized the mass surveillance capabilities of Section 702 without any of the necessary reforms that had been floated in previous bills, it also enhanced its powers by expanding what it can be used for and who must comply with the government’s requests for data.

Where Section 702 was enacted under the guise of targeting people not on U.S. soil to assist with national security investigations, there are no such narrow limits on the use of communications acquired under the mass surveillance law. Following the passage of RISAA, this private information can now be used to vet immigrants and asylum seekers and to conduct intelligence for broadly construed “counter narcotics” purposes.

The bill also expanded the definition of “Electronic Communications Service Provider,” or ECSP. Under Section 702, anyone who oversees the storage or transmission of electronic communications—be it emails, text messages, or other online data—must cooperate with the federal government’s requests to hand over data. Under the expanded definition of ECSP, there are intense and well-founded fears that anyone who hosts servers, websites, or provides internet to customers—or even just people who work in the same building as these providers—might be forced to become a tool of the surveillance state. As of December 2024, the fight is still on in Congress to clarify, narrow, and reform the definition of ECSP.

The one merciful change that occurred as a result of the 2024 smackdown over Section 702’s renewal was that it only lasts two years. That means in Spring 2026 we have to be ready to fight again to bring meaningful change, transparency, and restriction to Big Brother’s favorite law.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Global Age Verification Measures: 2024 in Review

27 décembre 2024 à 13:29

EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.

Kids Experiencing Harm is Not Just an Online Phenomenon

In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face fines of over $30 million. This is similar to France’s ban last year on social media access for children under 15 without parental consent, and Norway has pledged to follow with a similar ban.

No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm—there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and from a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying in the offline world, as well as online.

The internet is a valuable resource for both young people and adults who rely on the internet to find community and themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues, will all have to beg minors to leave and institute age verification tools to ensure that it happens. 

Limiting Access for Kids Limits Access for Everyone 

Through this wave of age verification bills, governments around the world are burdening internet users and forcing them to sacrifice their anonymity, privacy, and security simply to access lawful speech. For adults, this is true even if that speech constitutes sexual or explicit content. These laws are censorship laws, and rules banning sexual content usually hurt marginalized communities, and the groups that serve them, the most. History shows that over-censorship is inevitable.

This year, Canada also introduced an age verification measure, bill S-210, which seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. This was introduced to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.” But requiring people of all ages to show ID to get online won’t help women or young people. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. This creates a legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who just transmit it–knowingly or not. 

Without Comprehensive Privacy Protections, These Bills Exacerbate Data Surveillance 

Under mandatory age verification requirements, users will have no way to be certain that the data they’re handing over is not going to be retained and used in unexpected ways, or even shared with unknown third parties. Millions of adult internet users would also be entirely blocked from accessing protected speech online because they are not in possession of the required form of ID.

Online age verification is not like flashing an ID card in person to buy particular physical items. In places that lack comprehensive data privacy legislation, the risk of surveillance is extensive. First, a person who submits identifying information online can never be sure if websites will keep that information, or how that information might be used or disclosed. Without requiring all parties who may have access to the data to delete that data, such as third-party intermediaries, data brokers, or advertisers, users are left highly vulnerable to data breaches and other security harms at companies responsible for storing or processing sensitive documents like drivers’ licenses. 

Second, and unlike in-person age-gates, the most common way for websites to comply with a potential verification system would be to require all users to upload and submit—not just momentarily display—a data-rich government-issued ID or other document with personal identifying information. In a brief to a U.S. court, EFF explained how this leads to a host of serious anonymity, privacy, and security concerns. People shouldn't have to disclose to the government what websites they're looking at—which could reveal sexual preferences or other extremely private information—in order to get information from that website. 

These proposals are coming to the U.S. as well. We analyzed various age verification methods in comments to the New York Attorney General. None of them are both accurate and privacy-protective. 

The Scramble to Find an Effective Age Verification Method Shows There Isn't One

The European Commission is also currently working on guidelines for the implementation of the child safety article of the Digital Services Act (Article 28) and may come up with criteria for effective age verification. In parallel, the Commission has asked for proposals for a ‘mini EU ID wallet’ to implement device-level age verification ahead of the expected rollout of digital identities across the EU in 2026. At the same time, smaller social media companies and dating platforms have for years argued that age verification should take place at the device or app-store level, and they will likely support the Commission’s plans. As we move into 2025, EFF will continue to follow these developments as the Commission’s apparent expectation that porn platforms adopt age verification to comply with their risk-mitigation obligations under the DSA becomes clearer.

Mandatory age verification is the wrong approach to protecting young people online. In 2025, EFF will continue urging politicians around the globe to acknowledge these shortcomings, and to explore less invasive approaches to protecting all people from online harms.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

27 décembre 2024 à 13:29

The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.

Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.

But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.

The broadest “fix” would be a federal law, and we’ve seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad swath of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs, because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.

There’s a host of problems with these bills, and you can read more about them here and here. 

A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny,”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. 

And states are taking swift action to further expand publicity rights. Take this year’s digital replica law in Tennessee, called the ELVIS Act, because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions.

Across the country, California couldn’t let Tennessee win the race for the most restrictive/protective rules for famous people (and their heirs). So it passed A.B. 1836, creating liability for anyone who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don’t have lawyers to help sort them out.

These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction. 

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Electronic Frontier Alliance Fought and Taught Locally: 2024 in Review

The EFF-chaired Electronic Frontier Alliance (EFA) has had a big year! EFA is a loose network of local groups fighting for digital rights in the United States. With an ever-increasing roster of allies across the country, including significant growth on university campuses, EFA has also undergone a bit of a facelift. With the new branding comes more resources to support local organizing and popular education efforts around the country. 

If you’re a member of a local or state group in the United States that believes in digital rights, please learn more at our FAQ page. EFA groups include hackers, advocates, security educators, tech professionals, activists young and old, and beyond. If you think it would make a good fit, please fill out an application here. The Alliance has scores of members, which all did great work this year. This review highlights just a few.

A new look for EFA 

This past July, the organizing team completed the much-needed EFA rebrand project with a brand new website. Thanks to the work of EFF’s Engineering and Design teams, organizers now have up-to-date merch, pamphlets, and useful organizer toolkits for alliance members with a range of organizing experience. Whether your group wants to lead advocacy letter campaigns, needs basic press strategies, or is organizing on college campuses, we have resources to help. We also updated our allies directory to better showcase our local members and make it easier for activists to find groups and get involved. And we put together a Bluesky starter kit to make it easy to follow members onto the new platform. This is a major milestone in our effort to build useful resources for the network, which we will continue to maintain and expand in the years ahead.

More local groups heeded the call:

The alliance continued to grow, especially on college campuses, creating opportunities for some fun cross-campus collaborations in the year ahead. This year, nine local groups across eight states joined up: 

  • Stop Surveillance City, Seattle, WA: Stop Surveillance City is fighting against mass surveillance, criminalization and incarceration, and further divestment from people-centered policies. They advocate for investment in their communities and want to increase programs that address the root causes of violence. 
  • Cyber Security Club @FSU, Tallahassee, FL: The Cyber Security Club is a student group sharing resources to get students started in cybersecurity and competing in digital Capture the Flag (CTF) competitions. 
  • UF Student Infosec Team (UFSIT), Gainesville, FL: UFSIT is the cybersecurity club at the University of Florida. They are student-led and passionate about all things cybersecurity, and their goal is to provide a welcoming environment for students to learn more about all areas of information security, including penetration testing, reverse engineering, vulnerability research, digital forensics, and more. 
  • NICC, Newark, NJ: NICC is the New Jersey Institute of Technology’s official information & cybersecurity club. As a student-run organization, NICC started as a way to give NJIT students interested in cybersecurity, ethical hacking, and CTFs a group that would help them learn, grow, and hone their skills. 
  • DC919, Raleigh, NC: DEF CON Group 919 is a community group in the Raleigh/Durham area of North Carolina, providing a gathering place for hacking discussions, conference support, and workshop testing. 
  • Community Broadband PDX, Portland, OR: Their mission is to guide Portlanders to create a new option for fast internet access: publicly-owned and transparently operated, affordable, secure, fast, and reliable broadband infrastructure that is always available to every neighborhood and community. 
  • DC215, Philadelphia, PA: DC215 is another DEF CON group advancing knowledge and education with those interested in science, technology, and other areas of information security through project collaborations, group gatherings, and group activities to serve their city. 
  • Open Austin, Austin, TX: Open Austin's mission is to end disparities in Austin in access to technology. It envisions a city that respects and serves all people, by facilitating issues-based conversations between government and city residents, providing service to local community groups that leverage local expertise, and advocating for policy that utilizes technology to improve the community. 
  • Encode Justice Georgia: Encode Justice GA is the third Encode Justice chapter to join the EFA. It is mostly made up of high school students learning the tools of organizing by focusing on issues like machine-learning algorithms and law enforcement surveillance. 

Alliance members are organizing for your rights: 

This year, we talked to the North Carolina chapter of Encode Justice, a network that includes over 1,000 high school and college students across more than 40 U.S. states and 30 countries. A youth-led movement for safe, equitable AI, its mission is mobilizing communities for AI policies that are aligned with human values. The NC chapter has led educational workshops, policy memos, and legislative campaigns at both the state and city council levels, while lobbying officials and building coalitions with other North Carolinians.

Local groups continued to take on fights to defend constitutional protections against police surveillance overreach around the country. We caught up with our friends at the Surveillance Technology Oversight Project (S.T.O.P.) in New York, which litigates and advocates for privacy, working to push back against local government mass surveillance. STOP worked to pass the Public Oversight of Surveillance Technology Act in the New York City Council and used the law to uncover previously unknown NYPD surveillance contracts. This year they made significant strides in their campaign to ‘Ban the Scan’ (face recognition) in both the state assembly and the city council.

Another heavy hitter in the alliance, Lucy Parsons Labs, took the private-sector Atlanta Police Foundation to court to seek transparency over the functions it performs on behalf of law enforcement agencies, arguing that those functions should be open to the same public records requests as the government agencies they serve.

Defending constitutional rights against encroachments by police agencies is an uphill battle, and our allies in San Diego’s TRUST Coalition were among those fighting to protect Community Control Over Police Surveillance requirements previously passed by their city council.

We checked in with CCTV Cambridge on their efforts to address digital equity with their Digital Navigator program, and highlighted them for Digital Inclusion Week 2024. CCTV Cambridge works across all demographics in their city. For example, they implemented a Youth Media Program where teens get paid while developing the skills to become professional digital media artists. They also have a Foundational Technology program for the elderly and others who struggle with the increasing demands of technology in their lives.

This has been a productive year for digital rights organizing in the Pacific Northwest. We caught up with several allies in Portland, Oregon, at the PDX People’s Digital Safety Fair about the campaign to bring high-speed broadband to their city, which is led by Community Broadband PDX and the Personal TelCo Project. With six active EFA members in Portland and three in neighboring Washington state, we were excited to watch the growing momentum for digital rights in the region.

Citizens Privacy Coalition crowdfunded a documentary on the impacts of surveillance in the Bay Area, called "Watch the Watchers." The film features EFF's Eva Galperin and addresses how to combat surveillance across policy, technological guardrails, and individual action. 

Allies brought knowledge to their neighbors: 

The Electronic Frontiers track at the sci-fi, fantasy, and comic book-oriented Dragon*Con in Atlanta celebrated its 25th anniversary, produced in coordination with EFA member Electronic Frontiers Georgia. The digital rights component of Dragon*Con had its largest number of panels yet, covering a wide variety of digital rights issues, from vehicle surveillance to clampdowns on First Amendment-protected activities. Members of EF-Georgia, EFF, and allied organizations presented on a variety of topics.

More of this year’s Dragon*Con panels can be found at EF-Georgia’s special Dragon*Con playlist. 

EFF-Austin led the alliance in recorded in-person events, with monthly expert talks in Texas and meet-ups for people in their city interested in privacy, security, and tech skills. Among other discussions, they worked with new EFA member Open Austin to talk about how Austinites can get involved through civic-minded technology efforts.

Forging ahead into 2025: 

In complicated times, the EFA team is committed to building bridges for local groups and activists, because we build power when we work together, whether to fight for our digital rights, or to educate our neighbors on the issues and the technologies they face. In the coming year, a lot of EFA members will be focused on defending communities that are under attack, spreading awareness about the role data and digital communications play in broader struggles, and cultivating skills and knowledge among their neighbors.

To learn more about how the EFA works, please check out our FAQ page, and apply to join us. 

Past EFA member profiles: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Growing Intersection of Reproductive Rights and Digital Rights: 2024 in Review

Par : Daly Barnett
27 décembre 2024 à 13:24

Dear reader of our blog, surely by now you know the format: as we approach the end of the year, we look back on our work, count our wins, learn from our misses, and lay the groundwork for a better future. It's been an intense year in the fight for reproductive rights and its intersections with digital civil liberties. Going after cops illegally sharing location data, fighting the data broker industry, and building coalitions with the broader movement for reproductive justice—we've stayed busy.

The Fight Against Warrantless Access to Real-Time Location Tracking

The location data market is an unregulated nightmare industry that poses an existential threat to everyone's privacy, but especially to those embroiled in the fight for reproductive rights. In a recent blog post, we wrote about the particular dangers posed by LocateX, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphones. Cops shouldn't be able to buy their way around having to get a warrant for real-time location tracking of anyone they please, regardless of the context. In regressive states that ban abortion, however, the problems with LocateX illustrate just how severe the issue can be for such a large population of people.

Building Coalition Within Digital Civil Liberties and Reproductive Justice

Part of our work in this movement is recognizing our lane: providing digital security tips, promoting the rights to privacy and free expression, and making connections with community leaders to support and elevate their work. This year we hosted a livestream panel featuring various next-level thinkers and reproductive justice movement leaders. Make sure to watch it if you missed it! Recognizing and highlighting our shared struggles, interests, and avenues for liberation is exactly how movements are fought for and won.

The Struggle to Stop Cops from Illegally Sharing ALPR data

It's been a multi-year battle to stop law enforcement agencies from illegally sharing out-of-state ALPR (automatic license plate reader) data. Thankfully, this year we were able to celebrate a win: a grand jury in Sacramento moved to investigate two police agencies that had been illegally sharing this type of data. We're glad to declare victory, but those two agencies are far from the only problem. We hope this sets a precedent that cops aren't above the law, and we will continue to fight for just that. This win will help us build momentum to continue this fight into the coming year.

Sharing What We Know About Digital Surveillance Risks

We'll be the first to tell you that expertise in digital surveillance threats always begins with listening. We've learned a lot in the few years we've been researching the privacy and security risks facing this issue space, much of it gathered from conversations and trainings with on-the-ground movement workers. We gathered what we've learned from that work and distilled it into an accessible format for anyone who needs it. Behind the scenes, this research continues to inform the hands-on digital security trainings we provide to activists and movement workers.

As we proceed into an uncertain future where abortion access will remain a difficult struggle, we'll continue to do what we do best: standing vigilant for people's right to privacy, fighting bad internet laws, protecting free speech online, and building coalitions with others. Thank you for your support.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

You Can Be a Part of this Grassroots Movement 🧑‍💻

26 décembre 2024 à 11:54

You ever hear the saying, "it takes a village"? I never really understood the saying until I started going to conferences, attending protests, and working on EFF's membership team. 

You see, EFF's mission thrives because we are powered by people like you. Just as we fight for your privacy and free expression through grassroots advocacy, we rely on grassroots support to make all of our work possible. Will you join our movement with a small monthly donation today?

Start A Monthly Donation

Your Support Makes the Difference

Powered by your support, EFF never needs to appease outside interests. With over 75% of our funding coming from individual donations, your contribution ensures that we can keep fighting for your rights online—no matter what. So, whether you give $5/month or include us in your planned giving, you're ensuring our team can stand up for you.

Donate by December 31 to help unlock bonus grants! Your automatic monthly or annual donation—of any amount—will count towards our Year-End Challenge and ensures you're helping us win this challenge in future years, too.

🌟 BE A PART OF THE CHANGE 🌟

When you become a Sustaining Donor with a monthly or annual gift, you are not only supporting EFF's expert lawyers and activists, but you also get to choose free EFF member gear each year as a thank you from us. For just $10/month you can even choose our newest member t-shirt: Fix Copyright.

Show your support for privacy and free speech with EFF member gear.

We'd love to have you beside us for every legislative victory we share, every spotlight we shine on bad actors, and every tool we develop to keep you safe online.

Start a monthly donation with EFF today and keep the digital freedom movement strong!

Join The Digital Rights Movement

Unlock Bonus Grants Before 2025

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Surveillance Self-Defense: 2024 in Review

26 December 2024 at 10:39

This year, we celebrated the 15th anniversary of our Surveillance Self-Defense (SSD) guide. How’d we celebrate? We kept at it—continuing to work on, refine, and update one of the longest-running security and privacy guides on the internet.

Technology changes quickly enough as it is, but so does the language we use to describe that technology. In order for SSD to thrive, it needs careful attention throughout the year. So, we like to think of SSD as a garden, always in need of a little watering, maybe some trimming, and the occasional mowing down of dead technologies. 

Brushing Up on the Basics

A large chunk of SSD exists to explain concepts around digital security in the hopes that you can take that knowledge to make your own decisions about your specific needs. As we often say, security is a mindset, not a purchase. But in order to foster that mindset, you need some basic knowledge. This year, we set out to refine some of this guidance in the hopes of making it easier to read and useful for a variety of skill levels, updating several of our foundational guides along the way.

Big Guides, Big (and Small) Changes

If you’re looking for something a bit longer, then some of our more complicated guides are practically novels. This year, we updated a few of these.

We went through our Privacy Breakdown of Mobile Phones and updated it with more recent examples when applicable, and included additional tips at the end of some sections for actionable steps you can take. Phones continue to be one of the most privacy-invasive devices we own, and getting a handle on what they’re capable of is the first step to figuring out what risks you may face.

Our Attending a Protest guide is something we revisit every year (sometimes a couple times a year) to ensure it’s as accurate as possible. This year was no different, and while there were no sweeping changes, we did update the included PDF guide and added screenshots where applicable.

We also lightly reworked our How to: Understand and Circumvent Network Censorship guide to frame it more as instructional guidance, and included new features and tools for getting around censorship, like using a proxy in messaging tools.

New Guides

We saw two additions to the SSD this year. First up was How to: Detect Bluetooth Trackers, our guide to locating unwanted Bluetooth trackers—like Apple AirTags or Tile—that someone may use to track your location. Both Android and iOS have made changes to detecting these sorts of trackers, but the wide array of different products on the market means it doesn’t always work as expected.

We also put together a guide for the iPhone’s Lockdown Mode. While not a feature that everyone needs to consider, it has proven helpful in some cases, and knowing what those circumstances are is an important step in deciding if it’s a feature you need to enable.  

But How do I?

As the name suggests, our Tool Guides are all about learning how to best protect what you do on your devices. This might be setting up two-factor authentication, turning on encryption on your laptop, or setting up something like Apple’s Advanced Data Protection. These guides tend to need a yearly look to ensure they’re up-to-date. For example, Signal saw the launch of usernames, so we went in and made sure that was added to the guide. We gave a number of other tool guides the same treatment this year.

And Then There Were Blogs

Surveillance Self-Defense isn’t just a website, it’s also a general approach to privacy and security. To that end, we often use our blog to tackle more specific questions or respond to news.

This year, we talked about the risks you might face using your state’s digital driver’s license, and whether or not the promise of future convenience is worth the risks of today.

We dove into an attack method in VPNs called TunnelVision, which showed how it was possible for someone on a local network to intercept some VPN traffic. We’ve reiterated our advice here that VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
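
To make the mechanism concrete: TunnelVision abuses DHCP option 121 (classless static routes) to hand a victim routes that are more specific than the VPN's catch-all default route, and IP routing always prefers the most specific match. Below is a minimal Python sketch of that longest-prefix-match logic; the route table, addresses, and interface names are hypothetical illustrations, not taken from the TunnelVision research or from any real network.

```python
# Why DHCP-pushed routes beat a VPN's default route: the routing table
# always picks the most specific (longest-prefix) route for a destination.
import ipaddress

# Hypothetical table after a TunnelVision-style malicious DHCP exchange.
routes = [
    ("0.0.0.0/0", "vpn0"),    # the VPN's catch-all default route
    ("0.0.0.0/1", "eth0"),    # pushed via DHCP option 121 by the attacker
    ("128.0.0.0/1", "eth0"),  # together the two /1 routes cover all of IPv4
]

def pick_interface(destination: str) -> str:
    """Return the interface of the most specific route containing destination."""
    dst = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(net), iface)
               for net, iface in routes
               if dst in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(pick_interface("203.0.113.7"))  # -> eth0: the packet bypasses vpn0
```

Because a /1 route is more specific than the VPN's /0 route, every destination matches the attacker's routes first and traffic exits outside the encrypted tunnel. That is why mitigations have to happen at the VPN client or operating system level rather than inside the tunnel protocol itself.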

Location data privacy is still a major issue this year, with horrific abuses of this data, both actual and potential, popping up in the news constantly. We showed how and why you should disable location sharing in apps that don’t need access to function.

As mentioned above, our SSD on protesting is a perennial always in need of pruning, but sometimes you need to plant a whole new flower, as was the case when we decided to write up tips for protesters on campuses around the United States.

Every year, we fight for more privacy and security, but until we get that, stronger control over our data and a better understanding of how technology works are our best defense.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review

For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission handed down many regulations covering all sectors, but Big Tech has been the center of its focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe.

The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content. 

For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks,” meaning those arising both from the platforms’ underlying design and rules and from how these services are used by the public. Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns, while simultaneously upholding fundamental rights such as freedom of expression, and while considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.

As you can see, the DSA is trying to walk a fine line between addressing safety concerns and the priorities of the market. It imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without prescribing the platforms’ operations so tightly that they can’t innovate and thrive.

The DMA, on the other hand, concerns itself entirely with the macro level – not with the rights of users, but with the obligations of, and restrictions on, the largest, most dominant platforms.

The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.  

Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem. 

As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?

Fairness Regulation: Ambition and High-Stakes Clashes 

There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true. 

Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.  

Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European. 

In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense for this is that they were contractually obliged to collect this information, because their terms and conditions represented a promise to users to show them surveillance ads, and if they didn’t gather all that information, they’d be breaking that promise. 

The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services – Facebook, Instagram, Threads, and its “metaverse” (snicker) – are no longer allowed to plunder your private information. They must get your consent.

In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again. 

Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.  

For all its intransigence, Apple is getting off extremely lightly. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage – a dominant platform that the company CEO openly boasts about as a source of lock-in – was not a “gatekeeper platform.”

Platform regulation: A delicate balance 

For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t in the firing line?

EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.  

In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance! 

But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship. 

In 2024, our fears were realized when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Thierry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK, and to an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, and then blurred the DSA’s critical line between “illegal” and “harmful”; Breton’s letter also ignored the territorial limits of the DSA, demanding content takedowns that reached outside the EU.

Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommended that the guidelines prioritize best practices instead of policing speech. Additionally, we recommended that DSA risk assessment and mitigation compliance evaluations prioritize ensuring respect for fundamental rights.

The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But despite concerns raised by EFF and other civil society groups, a new EU law, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively forcing platforms to host media content even when it violates their rules. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Celebrating Digital Freedom with EFF Supporters: 2024 in Review

26 December 2024 at 10:33

“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.” It can be a tough job. A lot of our time is spent fighting bad things that are happening in the world or fixing things that have been broken for a long time.

But this work is important, and we've accomplished great things this year! Thanks to your help, we pushed the USPTO to withdraw harmful patent review proposals, fought for the public's right to access police drone footage, and continue to see more and more of the web encrypted thanks to Certbot and Let’s Encrypt.

Of course, the biggest reason EFF is able to fight for privacy and free expression online is support from EFF members. Public support is not only the reason we can operate but is also a great motivator to wake up and advocate for what’s right—especially when we get to hang out with some really cool folks! And with that, I’d like to reminisce.

EFF's Bay Area Festivities

Early in the year we held our annual Spring Members’ Speakeasy. We invited supporters in the Bay Area to join us at Babylon Burning, where all of EFF’s t-shirts, hoodies, and much of our swag are made. There, folks got a hands-on opportunity to print their own tote bags! It was also a treat to see t-shirts that even I had never seen before. Side note: EFF has a lot of mechas on members’ t-shirts.

Vintage EFF t-shirts hung across the walls at Babylon Burning.

The EFF team had a great time with EFF supporters at events throughout the year. Of course, my mind was blown seeing the questions EFF gamemasters (including the Cybertiger) came up with for both Tech Trivia and Cyberlaw Trivia. What was even more impressive was seeing how many answers teams got right at both events. During Cyberlaw Trivia, one team was able to recite 22 digits of pi, winning the tiebreaker question and the coveted first place prize!

Beating the Heat in Las Vegas

EFF staff with the Uber Contributor Award.

Next came one of my favorite summer pastimes: beating the heat in Las Vegas, where we get to see thousands of EFF supporters for the summer security conferences—BSidesLV, Black Hat, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week. The support EFF receives during the summer security conferences always amazes me, and it’s a joy to say hi to everyone who stops by to see us. We received an award from DEF CON and even speed-ran a legal case, ensuring a security researcher’s ability to give their talk at the conference.

While the lawyers were handling the legal case at DEF CON, a subgroup of us had a blast participating in the EFF Benefit Poker Tournament. Forty-six supporters and friends played for money, glory, and the future of the web—all while using these new EFF playing cards! In the end, only one winner could beat the celebrity guests, including Cory Doctorow and Deviant (even winning the literal shirt off of Deviant's back).

EFFecting Change

This year we also launched a new livestream series: EFFecting Change. With our initial three events, we covered recent Supreme Court cases and how they affect the internet, keeping yourself safe when seeking reproductive care, and how to protest with privacy in mind. We’ve seen a lot of support for these events and are excited to continue them next year. Oh, and no worries if you missed one—they’re all recorded here!

Congrats to Our 2024 EFF Award Winners

We wanted to end the year in style, of course, with our annual EFF Awards. This year we gave awards to 404 Media, Carolina Botero, and Connecting Humanity—and you can watch the keynote if you missed it. We’re grateful to honor and lift up the important work of these award winners.

EFF staff and EFF Award Winners holding their trophies.

And It's All Thanks to You

There was so much more to this year too. We shared campfire tales from digital freedom legends, the Encryptids; poked fun at bogus copyright law with our latest membership t-shirt; and hosted even more events throughout the country.

As 2025 approaches, it’s important to reflect on all the good work that we’ve done together in the past year. Yes, there’s a lot going on in the world, and times may be challenging, but with support from people like you, EFF is ready to keep up the fight—no matter what.

Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting For Progress On Patents: 2024 in Review

By Joe Mullin
25 December 2024 at 10:34

The rights we have in the offline world—to speak freely, create culture, play games, build new things and do business—must be available to us online, as well. This core belief drives EFF’s work to fight the misuse of the patent system.

Despite significant progress we’ve made over the last decade, patents, and in particular vague software patents, remain a serious threat to online rights. The median patent lawsuit isn't filed by what Americans would recognize as an ‘inventor,’ but by an anonymous limited liability company that provides no products or services, and instead uses patents to threaten others over alleged infringement. In other words, a patent troll. In the tech sector, more than 85% of patent lawsuits are filed by these “non-practicing entities.” 

That’s why at EFF, we continue to help individuals and organizations fight patent threats related to everyday activities like using CAPTCHAs and picture menus, tracking packages or vehicles, teaching languages, holding online contests, or playing simple games online.

Here’s where the fight stands as we move into 2025. 

Defending the Public’s Right To Challenge Bad Patents

In 2012, recognizing the persistent problem of an overburdened patent office issuing countless dubious patents each year, Congress established a system called “inter partes reviews” (IPRs) to review and challenge patents. While far from perfect, IPRs have led to the cancellation of thousands of patents that should never have been granted in the first place.

It’s no surprise that big patent owners and patent trolls have long sought to dismantle the IPR system. After unsuccessful attempts to persuade federal courts to dismantle IPRs, they shifted tactics in the past 18 months, attempting to convince the U.S. Patent and Trademark Office (USPTO) to undermine the IPR system by changing the rules on who can use it. 

EFF opposed these proposed changes, urging our supporters to file public comments. This effort was a resounding success. After reviewing thousands of comments, including nearly 1,000 inspired by EFF’s call to action, the USPTO withdrew its proposal.

Stopping Congress From Re-Opening The Door To The Worst Patents 

The patent system, particularly in the realm of software, is broken. For more than 20 years, the U.S. Patent Office has issued patents on basic cultural or business practices, often with little more than the addition of computer jargon or trivial technical elements. 

The Supreme Court addressed this issue a decade ago with its landmark decision in a case called Alice v. CLS Bank, ruling that simply adding computer language to these otherwise generic patents isn’t enough to make them valid. However, Alice hasn’t fully protected us from patent trolls. Even with this decision, the cost of challenging a patent can run into hundreds of thousands of dollars, enabling patent trolls to make “nuisance” demands for amounts of $100,000 or less. But Alice has dampened the severity and frequency of patent troll claims, and allowed for many more businesses to fight back when needed. 

So we weren’t surprised when some large patent owners tried again this year to overturn Alice, with the introduction of the Patent Eligibility Restoration Act (PERA), which would have brought the worst patents back into the system. PERA would also have overturned the Supreme Court ruling that prevents the patenting of human genes. EFF opposed PERA at every stage, and late this year, its supporters abandoned their efforts to pass it through the 118th Congress. We know they will try again next year, and we’ll be ready.

Shining Light On Secrecy In Patent Litigation

Litigation in the U.S. is supposed to be transparent, particularly in patent cases involving technologies that impact millions of internet users daily. Unfortunately, this is not always the case. In Entropic Communications LLC v. Charter Communications, filed in the U.S. District Court for the Eastern District of Texas, overbroad sealing of documents has obscured the case from public view. EFF intervened in the case to protect the public’s right to access federal court records, as the claims made by Entropic could have wide-reaching implications for anyone using cable modems to connect to the internet.

Our work to ensure transparency in patent disputes is ongoing. In 2016, EFF intervened in another overly-sealed patent case in the Eastern District of Texas. In 2022, we did the same in California, securing an important transparency ruling. That same year, we supported a judge’s investigation into patent owners in Delaware, which ultimately resulted in referrals for criminal investigation. The judge’s actions were upheld on appeal this year. 

It remains far too easy for patent trolls to extort and exploit individuals and companies simply for creating or using software. In 2025, EFF will continue fighting for a patent system that’s open, fair, and transparent. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

We Stood Up for Access to the Law and Congress Listened: 2024 in Review

25 December 2024 at 10:34

Ever since they lost in court, a number of industry giants have pushed a bill that purported to be about increasing access to the law. In fact, it would give them enormous power over the public’s ability to access, share, teach, and comment on the law.

This sounds crazy—no one should be able to own the law. But these industry associations claim there’s a glaring exception to the rule: safety and building codes. The key distinction, they insist, is how these particular laws are developed. Often, when it comes to creating the best practices for an industry, a group of experts comes together to draft model standards. Many of those standards are then “incorporated by reference” into law, making them legal mandates just as surely as the U.S. tax code.

But unlike most U.S. laws, the industry associations that convene the experts claim that they own a copyright in the results, which means they get to control, and charge for, access to them.

The consequences aren’t hard to imagine. If you are a journalist trying to figure out whether a bridge that collapsed violated legal safety standards, you have to get the standards from the industry association, and pay for them. If you are a renter who wants to know whether your apartment complies with the fire code, you face the same barrier. And so on.

Many organizations are working to remedy the situation, making standards available online for free (or, in some cases, for free but with a “premium” version that offers additional services on top). Courts around the country have affirmed their right to do so. 

Which brings us to the “Protecting and Enhancing Public Access to Codes Act,” or “Pro Codes.” The Act requires industry associations to make standards incorporated by reference into law available for free to the public. But here’s the kicker: in exchange, Congress will affirm that the associations have a legitimate copyright in those laws.

This is a bad deal for the public. First, access will mean read-only, and subject to licensing limits. We already know what that looks like: currently the associations that make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you can look at the law (as long as you aren’t print-disabled and you know exactly what to look for), but you can’t share it, compare it, or comment on it. That’s fundamentally against the public interest, as many have said. It gives private parties a windfall to do badly what others, like EFF client Public Resource, already do better and for free.

Second, it’s solving a nonexistent problem. The many volunteers who develop these codes neither need nor want a copyright incentive. The industry associations don’t need it either—they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into law.

Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the law.   

We’re pleased that members of Congress have recognized the many problems with this law. Many of you wrote to your members of Congress to raise concerns, and when it was brought to a vote in committee, members registered those concerns. While it passed out of the House Judiciary Committee, the House of Representatives was asked to vote on the bill “on suspension,” meaning it could avoid debate and become law only if two-thirds of the House voted yes. In theory, suspension is meant to make it easier to pass uncontroversial laws.

Because you wrote in, because experts sent letters explaining the problems, enough members of Congress recognized that Pro Codes is not uncontroversial. It is not a small deal to allow industry giants to own parts of the law.  

This year, we are glad that so many people lent their time and energy to understanding the wolf in sheep’s clothing that the Pro Codes Act really was. And we hope that the standards development organizations (SDOs) behind it take note that they cannot pull the wool over everyone’s eyes. Not while we’re keeping watch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Police Surveillance in San Francisco: 2024 in Review

25 December 2024 at 10:33

From a historic ban on police using face recognition, to landmark CCOPS legislation, to the first ban in the United States of police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.

Unfortunately, 2024 was a far cry from those victories.

While EFF continues to fight for common-sense police reforms in our own backyard, this year saw a change in city politics to something darker and more unaccountable than we’ve seen in a while.

In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went so far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.

Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In the haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents like us have to wave our arms around and implore the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.

EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue, even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st-century governance.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review

24 December 2024 at 14:05

EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.

Removing Amazon Ring

We started off with a big change: the removal of our set of Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app — a move EFF and other organizations had spent years calling for. Though police can still get access to Ring camera footage by getting a warrant, or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.

Adding third-party platforms

In 2024, we added an important and growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear.

New Fusus data

404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.

New and updated ALPR data 

EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data. 

In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer. 

Redoing drone data

This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource. 

We also added new drone data from Illinois, Minnesota, and Texas.

We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed. 

Other cool uses of the Atlas

The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data. 

The Atlas of Surveillance is regularly featured in news outlets throughout the country, including MIT Technology Review’s reporting on drones and the Auburn Reporter’s coverage of ALPR use in Washington. It also became the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical Intervention.”

Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at Northwestern University, who used the data to make a cool visualization of the surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”


Thanks to our volunteers

The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.

The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review

As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple years.

All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.

We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.

Public Officials Censoring Comments on Government Social Media Pages

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED

The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.

The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court’s new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.

Government Mandates for Platforms to Carry Certain Online Speech

Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED  

The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.

In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

Government Coercion in Social Media Content Moderation

Case: Murthy v. Missouri – DECIDED

The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.

Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” grounds, because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.

However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.

Some Takeaways From These Three Sets of Cases

The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.

Three key takeaways emerged from these three sets of cases (across five total cases):

First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.

Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.

Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Government-Mandated Age Verification

Case: Free Speech Coalition v. Paxton – PENDING

Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review

By Aaron Mackey
24 December 2024 at 13:51

People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.

Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables services to host our speech. EFF was there to fight back on behalf of all internet users.

Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone, including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.

EFF helps thwart effort to repeal Section 230

Members of Congress introduced a bill in May this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.

As we wrote at the time, repealing Section 230 would be a disaster for internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion, and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who remove or curate other users’ posts.

Moreover, repealing Section 230 would not have hurt the biggest online services, given that they have astronomical amounts of money and resources to handle the deluge of legal claims that would be filed. Instead, repealing Section 230 would have solidified the dominance of the largest online services. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its dominance.

Thankfully, the bill did not advance, in part because internet users wrote to members of Congress objecting to the proposal. We hope lawmakers in 2025 put their energy toward ending Big Tech’s dominance by enacting a meaningful and comprehensive consumer data privacy law, or by passing laws that enable greater interoperability and competition between social media services. Those efforts will go a long way toward curbing Big Tech’s power without harming users’ online speech.

EFF stands up for users’ speech in courts

Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that will jeopardize people’s ability to read other people’s posts and make use of basic features of online services that benefit all users.

In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to recommend certain content reflects its own speech and thus Section 230’s protections do not apply.

We filed a friend-of-the-court brief in support of TikTok’s request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court to review the case.

In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit that claims basic features of the service, such as disappearing messages, “Stories,” and the ability to befriend mutual acquaintances, amounted to defectively designed products. The trial court’s ruling departs from a long line of other court decisions that ruled that these claims essentially try to plead around Section 230 by claiming that the features are the problem, rather than the illegal content that users created with a service’s features.

We filed a friend-of-the-court brief in support of Snap’s effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability of all internet users to rely on basic features of a given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature to users, despite the fact that the majority of people use the feature for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before the trial court.

EFF supports effort to empower users to customize their online experiences

While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.

That is precisely the goal of a tool being developed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst, known as Unfollow Everything 2.0. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.
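
As a rough illustration of the user-empowerment tools Section 230(c)(2)(B) covers, here is a hypothetical Python sketch of the core step such a tool performs: filtering a feed according to the user's own choices. The data shapes and names are invented for illustration; this is not code from Unfollow Everything 2.0.

```python
# A user-side filter: the user, not the platform, decides what to hide.
def filter_feed(posts: list[dict], unfollowed: set[str]) -> list[dict]:
    """Drop feed items whose source the user has chosen to unfollow."""
    return [post for post in posts if post["source"] not in unfollowed]

feed = [
    {"source": "friend_a", "text": "vacation photos"},
    {"source": "brand_page", "text": "sponsored post"},
]
print(filter_feed(feed, unfollowed={"brand_page"}))
# -> only friend_a's post remains in the user's view
```

The technical design mirrors the legal point: the filtering happens at the user's direction, on the user's side of the screen.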

Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a friend-of-the-court brief in support, arguing that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.

The district court hearing the case unfortunately dismissed it, but the ruling did not reach the merits of whether Section 230 protected Unfollow Everything 2.0. The court gave Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF in the Press: 2024 in Review

By Josh Richman
23 December 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding and explores the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s Netchoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities. 

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Defending Encryption in the U.S. and Abroad: 2024 in Review

Par : Joe Mullin
23 décembre 2024 à 11:05

EFF supporters get that strong encryption is tied to one of our most basic rights: the right to have a private conversation. In the digital world, privacy is impossible without strong encryption. 

That’s why we’ve always got an eye out for attacks on encryption. This year, we pushed back—successfully—against anti-encryption laws proposed in the U.S., the U.K. and the E.U. And we had a stark reminder of just how dangerous backdoor access to our communications can be. 

U.S. Bills Pushing Mass File-Scanning Fail To Advance

The U.S. Senate’s EARN IT Bill is a wrongheaded proposal that would push companies away from using encryption and towards scanning our messages and photos. There’s no reason to enact such a proposal, which technical experts agree would turn our phones into bugs in our pockets.

We were disappointed when EARN IT was voted out of committee last year, even though several senators made clear they wanted to see additional changes before they would support the bill. Since then, however, the bill has gone nowhere. That’s because so many people, including more than 100,000 EFF supporters, have voiced their opposition. 

People increasingly understand that encryption is vital to our security and privacy. And when politicians demand that tech companies install dangerous scanning software whether users like it or not, it’s clear to us all that they are attacking encryption, no matter how much obfuscation takes place. 

EFF has long encouraged companies to adopt policies that support encryption, privacy and security by default. When companies do the right thing, EFF supporters will side with them. EFF and other privacy advocates pushed Meta for years to make end-to-end encryption the default option in Messenger. When Meta implemented the change, they were sued by Nevada’s Attorney General. EFF filed a brief in that case arguing that Meta should not be forced to make its systems less secure. 

UK Backs Off Encryption-Breaking Language 

In the U.K., we fought against the wrongheaded Online Safety Act, which included language that would have let the U.K. government strongarm companies away from using encryption. After pressure from EFF supporters and others, the U.K. government gave last-minute assurances that the bill wouldn’t be applied to encrypted messages. The U.K. agency in charge of implementing the Online Safety Act, Ofcom, has now said that the Act will not apply to end-to-end encrypted messages. That’s an important distinction, and we have urged Ofcom to make that even more clear in its written guidance. 

EU Residents Do Not Want “Chat Control” 

Some E.U. politicians have sought to advance a message-scanning bill that was even more extreme than the U.S. anti-encryption bills. We’re glad to say the EU proposal, which has been dubbed “Chat Control” by its opponents, has also been stalled because of strong opposition. 

Even though the European Parliament last year adopted a compromise proposal that would protect our rights to encrypted communications, a few key member states at the EU Council spent much of 2024 pushing forward the old, privacy-smashing version of Chat Control. But they haven’t advanced. In a public hearing earlier this month, 10 EU member states, including Germany and Poland, made clear they would not vote for this proposal. 

Courts in the E.U., like the public at large, increasingly recognize that private online communication is a human right, and that the encryption required to facilitate it cannot be stripped away. The European Court of Human Rights recognized this in a milestone judgment earlier this year, Podchasov v. Russia, which specifically held that weakening encryption puts the human rights of all internet users at risk. 

A Powerful Reminder on Backdoors

All three of the above proposals are based on a flawed idea: that it’s possible to give some form of special access to people’s private data that will never be exploited by a bad actor. But that’s never been true: there is no backdoor that works only for the “good guys.” 

In October, the U.S. public learned about a major breach of telecom systems stemming from Salt Typhoon, a sophisticated Chinese government-backed hacking group. This hack infiltrated the same systems that major ISPs like Verizon, AT&T, and Lumen Technologies had set up for U.S. law enforcement and intelligence agencies to get “lawful access” to user data. It’s still unknown how extensive the damage is from this hack, which swept in people under surveillance by U.S. agencies but went far beyond them. 

If there’s any upside to a terrible breach like Salt Typhoon, it’s that it is waking up some officials to the fact that encryption is vital to both individual and national security. Earlier this month, a top U.S. cybersecurity chief said “encryption is your friend,” a welcome break from the messaging we at EFF have heard from officials over the years. Unfortunately, other agencies, including the FBI, continue to push the idea that strong encryption can be coupled with easy access by law enforcement. 

Whatever happens, EFF will continue to stand up for our right to use encryption to have secure and private online communications.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

2024 Year in Review

Par : Cindy Cohn
23 décembre 2024 à 10:50

It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest that 2024 was a big year. 

If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise, practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they need a warrant.  

For example, much of the public discourse this year was taken up by generative AI. The issue seemed to be a Rorschach test for everyone’s anxieties about technology, be they about privacy, the replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is being used against our rights, EFF will oppose that use. It’s a future-proof way of protecting us. If we have privacy protections, labor protections, and protections against government invasions, then it does not matter what technology takes over the public imagination; we will have recourse against its harms. 

But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council meetings online.  

With your help, we have let the United States Congress know that its citizens are for protecting the free press and against laws that would cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new technologies, new proposed laws, and legal precedent.  

Even all of that does not capture everything we did this year. And we did not—indeed, we cannot—do it without you. Your support keeps the lights on and ensures we are not speaking just for EFF as an organization but for our thousands of tireless members. Thank you, as always.  

We will update this page with new stories about digital rights in 2024 every day between now and the new year. 

Defending Encryption in the U.S. and Abroad
EFF in the Press
The U.S. Supreme Court Continues its Foray into Free Speech and Tech
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It
We Stood Up for Access to the Law and Congress Listened
Police Surveillance in San Francisco
Fighting For Progress On Patents
Celebrating Digital Freedom with EFF Supporters
Surveillance Self-Defense
EU Tech Regulation—Good Intentions, Unclear Consequences
The Growing Intersection of Reproductive Rights and Digital Rights
Electronic Frontier Alliance Fought and Taught Locally
Global Age Verification Measures
While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas
State Legislatures Are The Frontline for Tech Policy
Fighting Automated Oppression
Exposing Surveillance at the U.S.-Mexico Border
Federal Regulators Limit Location Brokers from Selling Your Whereabouts
Fighting Online ID Mandates
AI and Policing
Kids Online Safety Act Continues to Threaten Our Rights Online
Deepening Government Use of AI and E-Government Transition in Latin America
Decentralization Reaches a Turning Point

EFF Tells Appeals Court To Keep Copyright’s Fair Use Rules Broad And Flexible

Par : Joe Mullin
21 décembre 2024 à 12:05

It’s critical that copyright be balanced with limitations that support users’ rights, and perhaps no limitation is more important than fair use. Critics, humorists, artists, and activists all must have rights to re-use and re-purpose source material, even when it’s copyrighted. 

Yesterday, EFF weighed in on another case that could shape the future of our fair use rights. In Sedlik v. Von Drachenberg, a Los Angeles tattoo artist created a tattoo based on a well-known photograph of Miles Davis taken by photographer Jeffrey Sedlik. A jury found that Von Drachenberg, the tattoo artist, did not infringe the photographer’s copyright because her version was different from the photo; it didn’t meet the legal threshold of being “substantially similar.” After the trial, the judge considered Sedlik’s remaining arguments and upheld the jury’s findings. 

On appeal, Sedlik has made arguments that, if upheld, could narrow fair use rights for everyone. The appeal brief suggests that only secondary users who make “targeted” use of a copyrighted work have strong fair use defenses, relying on an incorrect reading of the Supreme Court’s decision in Andy Warhol Foundation v. Goldsmith.

Fair users select among various alternatives, for both aesthetic and practical reasons.

Such a reading would upend decades of Supreme Court precedent that makes it clear that “targeted” fair uses don’t get any special treatment as opposed to “untargeted” uses. As made clear in Warhol, the copying done by fair users must simply be “reasonably necessary” to achieve a new purpose. The principle of protecting new artistic expressions and new innovations is what led the Supreme Court to protect video cassette recording as fair use in 1984. It also contributed to the 2021 decision in Oracle v. Google, which held that Google’s copying of computer programming conventions created for desktop computers, in order to make it easier to design for modern smartphones, was a type of fair use. 

Sedlik argues that if a secondary user could have chosen another work, this means they did not “target” the original work, and thus the user should have a lessened fair use case. But that has never been the rule. As the Supreme Court explained, Warhol could have created art about a product other than Campbell’s Soup; but his choice to copy the famous Campbell’s logo was fully justified because it was “well known to the public, designed to be reproduced, and a symbol of an everyday item for mass consumption.” 

Fair users always select among various alternatives, for both aesthetic and practical reasons. A film professor might know of several films that expertly demonstrate a technique, but will inevitably choose just one to show in class. A news program alerting viewers to developing events may have access to many recordings of the event from different sources, but will choose just one, or a few, based on editorial judgments. Software developers must make decisions about which existing software to analyze or to interoperate with in order to build on existing technology. 

The idea of penalizing these non-“targeted” fair uses would lead to absurd results, and we urge the 9th Circuit to reject this argument. 

Finally, Sedlik also argues that the tattoo artist’s social media posts are necessarily “commercial” acts, which would push the tattoo art further away from fair use. Artists’ use of social media to document their processes and work has become ubiquitous, and such an expansive view of commerciality would render the concept meaningless. That’s why multiple appellate courts have already rejected such a view; the 9th Circuit should do so as well. 

In order for innovation and free expression to flourish in the digital age, fair use must remain a flexible rule that allows for diverse purposes and uses. 

Further Reading: 

  • EFF Amicus Brief in Sedlik v. Von Drachenberg 

Ninth Circuit Gets It: Interoperability Isn’t an Automatic First Step to Liability

20 décembre 2024 à 14:26

A federal appeals court just gave software developers, and users, an early holiday present, holding that software updates aren’t necessarily “derivative” for purposes of copyright law just because they are designed to interoperate with the software they update.

This sounds kind of obscure, so let’s cut through the legalese. Lots of developers build software designed to interoperate with preexisting works. This kind of interoperability is crucial to innovation, particularly in a world where a small number of companies control so many essential tools and platforms. If users want to be able to repair, improve, and secure their devices, they must be able to rely on third parties to help. Trouble is, Big Tech companies want to be able to control (and charge for) every possible use of the devices and software they “sell” you – and they won’t hesitate to use the law to enforce that control. 

Courts shouldn’t assist, but unfortunately a federal district court did just that in the latest iteration of Oracle v. Rimini. Rimini provides support to improve the use and security of Oracle products, so customers don’t have to depend entirely on Oracle itself. Oracle doesn’t want this kind of competition, so it sued Rimini for copyright infringement, arguing that a software update Rimini developed was a “derivative work” because it was intended to interoperate with Oracle's software, even though the update didn’t use any of Oracle’s copyrightable code. Derivative works are typically things like a movie based on a novel, or a translation of that novel. Here, the only “derivative” aspect was that Rimini’s code was designed to interact with Oracle’s code.  
 
Unfortunately, the district court initially sided with Oracle, setting a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is substantially similar to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those innovations.  

Rimini appealed to the Ninth Circuit, on multiple grounds. EFF, along with a diverse group of stakeholders representing consumers, small businesses, software developers, security researchers, and the independent repair community, filed an amicus brief in support explaining that the district court ruling on interoperability was not just bad policy, but also bad law.  

 The Ninth Circuit agreed: 

In effect, the district court adopted an “interoperability” test for derivative works—if a product can only interoperate with a preexisting copyrighted work, then it must be derivative. But neither the text of the Copyright Act nor our precedent supports this interoperability test for derivative works. 

 The court goes on to give a primer on the legal definition of derivative work, but the key point is this: a work is only derivative if it “substantially incorporates the other work.”

Copyright already reaches far too broadly, giving rightsholders extraordinary power over how we use everything from music to phones to televisions. This holiday season, we’re raising a glass to the judges who sensibly reined that power in. 

Customs & Border Protection Fails Baseline Privacy Requirements for Surveillance Technology

Par : Dave Maass
20 décembre 2024 à 13:47

U.S. Customs and Border Protection (CBP) has failed to address six out of six main privacy protections for three of its border surveillance programs—surveillance towers, aerostats, and unattended ground sensors—according to a new assessment by the Government Accountability Office (GAO).

In the report, GAO compared the policies for these technologies against six of the key Fair Information Practice Principles that agencies are supposed to use when evaluating systems and processes that may impact privacy, as dictated by both Office of Management and Budget guidance and the Department of Homeland Security's own rules.

[Chart: the three border surveillance technologies and how their policies address the Fair Information Practice Principles]

These include:

  • Data collection. "DHS should collect only PII [Personally Identifiable Information] that is directly relevant and necessary to accomplish the specified purpose(s)."
  • Purpose specification. "DHS should specifically articulate the purpose(s) for which the PII is intended to be used."
  • Information sharing. "Sharing PII outside the department should be for a purpose compatible with the purpose for which the information was collected."
  • Data security. "DHS should protect PII through appropriate security safeguards against risks such as loss, unauthorized access or use, destruction, modification, or unintended or inappropriate disclosure."
  • Data retention. "DHS should only retain PII for as long as is necessary to fulfill the specified purpose(s)."
  • Accountability. "DHS should be accountable for complying with these principles, including by auditing the actual use of PII to demonstrate compliance with these principles and all applicable privacy protection requirements."

These baseline privacy elements for the three border surveillance technologies were not addressed in any "technology policies, standard operating procedures, directives, or other documents that direct a user in how they are to use a Technology," according to GAO's review.

CBP operates hundreds of surveillance towers along both the northern and southern borders, some of which are capable of capturing video more than seven miles away. The agency has six large aerostats (essentially tethered blimps) that use radar along the southern border, with others stationed in the Florida Keys and Puerto Rico. The agency also operates a series of smaller aerostats that stream video in the Rio Grande Valley of Texas, with the newest one installed this fall in southeastern New Mexico. And the report notes deficiencies with CBP's linear ground detection system, a network of seismic sensors and cameras that are triggered by movement or footsteps.

The GAO report underlines EFF's concerns that the privacy of people who live and work in the borderlands is violated when federal agencies deploy militarized, high-tech programs to confront unauthorized border crossings. The rights of border communities are too often treated as acceptable collateral damage in pursuit of border security.

CBP defended its practices by saying that it does, to some extent, address FIPS in its Privacy Impact Assessments, documents written for public consumption. GAO rejected this claim, saying that these assessments are not adequate in instructing agency staff on how to protect privacy when deploying the technologies and using the data that has been collected.

In its recommendations, the GAO calls on the CBP Commissioner to "require each detection, observation, and monitoring technology policy to address the privacy protections in the Fair Information Practice Principles." But EFF calls on Congress to hold CBP to account and stop approving massive spending on border security technologies that the agency continues to operate irresponsibly.

The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year

Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy-first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably, at some point, someone decides to poke in and steal that data.

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.
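Mechanically, these trackers are simple: a snippet of third-party code builds a request whose query string carries whatever page context it was given. Here is a minimal sketch, in Python, of what such a beacon request can look like; the endpoint, parameter names, and values are all hypothetical, not Kaiser's or any real tracker's:

    from urllib.parse import urlencode

    # Hypothetical analytics endpoint -- not any real tracker's URL.
    TRACKER_ENDPOINT = "https://analytics.example.com/collect"

    def beacon_url(page_url: str, search_term: str, user_id: str) -> str:
        """Build the kind of request a tracking snippet fires on page load.

        Everything in the query string leaves the site and lands on the
        third party's servers -- which is how a health search term can
        end up with an advertiser.
        """
        params = {
            "dl": page_url,      # the page the user is viewing
            "q": search_term,    # what the user searched for
            "uid": user_id,      # a persistent identifier tying it all together
        }
        return f"{TRACKER_ENDPOINT}?{urlencode(params)}"

    print(beacon_url(
        "https://healthsite.example/encyclopedia/chest-pain",
        "chest pain",
        "user-abc123",
    ))

Once a request like this fires, the sensitive term sits in a third party's logs, whatever the first party later does about it.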

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.

Head back to the table of contents.

The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using infostealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner), though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 

Head back to the table of contents.

The Only Stalkers Allowed Award: mSpy

mSpy, a commercially-available mobile stalkerware app owned by Ukrainian-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.

Head back to the table of contents.

The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the stolen data being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach it took some basic steps like resetting user passwords and strengthening its security infrastructure.

Head back to the table of contents.

The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 

Head back to the table of contents.

The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) in which 576,000 accounts were compromised via a “credential stuffing attack.” This is a common, relatively easy sort of automated attack where thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were in the Comcast data breach in 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the Roku breach, the company turned on two-factor authentication for all accounts. This way, even if someone did get access to your account password, they’d need that second code from another device; in Roku’s case, either your phone number or email address.
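To make the unique-password point concrete, here is a minimal sketch, in Python, of the kind of per-site password generation a password manager automates for you; the site names are just placeholders:

    import secrets
    import string

    # Use the secrets module (a cryptographically secure RNG), not random.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        """Generate one strong, independent password."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One independent password per site: if one site is breached, the
    # leaked credential tells an attacker nothing about the others,
    # which is exactly what defeats credential stuffing.
    vault = {site: generate_password() for site in ("roku", "comcast", "email")}
    for site, password in vault.items():
        print(site, password)

Because each password is generated independently, a credential dump from one service cannot be replayed against another.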

Head back to the table of contents.

The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.

Head back to the table of contents.

The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it didn’t just return the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakage, except… the API was also returning the secret used to generate the 2FA one-time codes. No way! So, if someone had enabled 2FA, it was immediately rendered useless by virtue of this field being visible to everyone.
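To see why a leaked OTP secret is game over, here is a short sketch using the third-party pyotp library; this illustrates how TOTP works in general, with names of our own choosing, and is not Spoutible's actual code:

    # pip install pyotp
    import pyotp

    # The server generates and stores this secret when a user enables 2FA.
    # It must never appear in an API response.
    secret = pyotp.random_base32()

    # The legitimate user's authenticator app derives the current code
    # from the secret and the clock...
    users_code = pyotp.TOTP(secret).now()

    # ...but anyone who read the secret out of a leaky API derives the
    # exact same code (within the same 30-second window), so the
    # "second factor" proves nothing.
    attackers_code = pyotp.TOTP(secret).now()
    assert users_code == attackers_code

The whole security of TOTP rests on the secret staying secret; the individual codes are disposable, but the secret is not.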

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!

-EFF thanks guest author Troy Hunt for this contribution to the Breachies.

Head back to the table of contents.

The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.

Head back to the table of contents.

The Biggest Health Breach We’ve Ever Seen Award: Change Health

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.

Head back to the table of contents.

The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 

Head back to the table of contents.

The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records about the individuals who use these companies’ services. A combination of infostealer malware infections on non-Snowflake machines and weak security protecting the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April through July of this year, Snowflake was not requiring two-factor authentication, an account security measure that could have protected against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to its website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, it stores and indexes troves of customer data for companies to look at. And the larger the amount of data stored, the bigger the target it becomes for malicious actors seeking leverage to extort those companies. The problem is, the data is about all of us. In the case of Snowflake customer AT&T, this includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place, or shortening the period for which it is retained. Otherwise it just sits there waiting for the next breach.
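As a concrete illustration of the retention half of that approach, here is a minimal sketch in Python; the call_logs schema and the 90-day window are our assumptions for illustration, not anything Snowflake or AT&T actually uses:

    import sqlite3
    from datetime import datetime, timedelta, timezone

    # Hypothetical table of call/text metadata. We assume created_at is
    # stored as ISO-8601 UTC text, so string order matches time order.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE call_logs (caller TEXT, callee TEXT, created_at TEXT)")

    RETENTION = timedelta(days=90)

    def purge_expired(conn: sqlite3.Connection) -> int:
        """Delete records older than the retention window.

        Records deleted on schedule cannot be stolen in the next
        breach -- that is the point of a short retention period.
        """
        cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
        cur = conn.execute("DELETE FROM call_logs WHERE created_at < ?", (cutoff,))
        conn.commit()
        return cur.rowcount

    print("purged rows:", purge_expired(conn))

Run on a schedule, a job like this bounds how much of anyone’s history a thief can ever walk away with.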

Head back to the table of contents.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you. When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening up a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. It might sound absurd, since kids can’t even open bank accounts, but if you have children, you can freeze their credit too.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 

Head back to the table of contents.

(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Saving the Internet in Europe: Defending Free Expression

19 décembre 2024 à 13:26

This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of Internet access and digital services promised a new era of freedom of expression, where everyone could share and access information, speak out and find an audience without relying on gatekeepers, and make, tinker with, and share creative works.  

Everyone should have the right to express themselves and share ideas freely. Various European countries experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.  

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.  

That’s why EFF has been fighting government threats to free expression and working to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes, such as notifying users and explaining decisions when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders. 

What’s the Digital Services Act and why do we keep talking about it? 

For the past few years, we have been especially busy addressing human rights concerns in the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta and app stores. However, not all of its rules apply to all services – instead, the DSA follows a risk-based approach that puts the most obligations on the largest services with the highest impact on users. All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and they face additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads.

The most stringent obligations apply to the largest online platforms and search engines, those with more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning, and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors, and public health. This novel approach might have merit but is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, for example age verification, have negative consequences of their own, like undermining users’ privacy and access to information.  

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities, and about the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and the expansion of its systemic-risk governance approach as a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not limit government authorities from becoming trusted flaggers.  

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine freedom of expression. Even though we have been successful in heading off some of the most harmful ideas, concerns remain, especially with regard to the politicization of the DSA’s enforcement and potential over-enforcement. That’s why we will keep a close eye on how the DSA is enforced, ready to use all means at our disposal to push back against over-enforcement and defend user rights.  

European laws often implicate users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance. The DSA HR Alliance is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably cause adults to lose their rights to private and anonymous speech, which is sometimes necessary.  

As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will keep fighting any insufficient efforts to protect speech and privacy online.  

Media freedom and plurality for everyone 

Another issue we have been championing is media freedom. Similar to the DSA, the EU recently overhauled its rules for media services with the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content by media outlets. Although intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users could no longer trust that online platforms will address content violating community standards. Besides, there is no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms’ ability to restrict or remove such content could undermine rather than foster public discourse.  

The final version of EMFA introduced a number of important safeguards but is still a bad deal for users. We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media, and limit the use of spyware against journalists.  

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past few years, we have revealed again and again how companies’ broad-stroke content moderation practices censor users in the name of fighting terrorism and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.  

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to show how restrictive speech policies often disproportionately affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

We're Creating a Better Future for the Internet 🧑‍🏭

Par : Aaron Jue
19 décembre 2024 à 12:20

In the early years of the internet, website administrators had to contend with a burdensome and expensive process to deploy SSL certificates. But today, hundreds of thousands of people have used EFF’s free Certbot tool to spread that sweet HTTPS across the web. Now almost all internet traffic is encrypted, and everyone gets a basic level of security. Small actions mean big change when we act together. Will you support important work like this and give EFF a Year-End Challenge boost?
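
For anyone curious what that ease of use looks like in practice, a minimal Certbot run on a common setup (assuming an nginx web server and the placeholder domain example.com) comes down to two commands:

    sudo certbot --nginx -d example.com   # obtain a certificate and configure nginx to serve HTTPS
    sudo certbot renew --dry-run          # verify that automatic renewal will work

That short exchange replaces what once took paperwork, fees, and manual server configuration.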

Give Today

Unlock Bonus Grants Before 2025

Make a donation of ANY SIZE by December 31 and you’ll help us unlock bonus grants! Every supporter gets us closer to a series of seven Year-End Challenge milestones set by EFF’s board of directors. These grants become larger as the number of online rights supporters grows. Everyone counts! See our progress.

🚧 Digital Rights: Under Construction 🚧

Since 1990, EFF has defended your digital privacy and free speech rights in the courts, through activism, and by making open source privacy tools. This team is committed to watching out for users no matter what direction technological innovation takes. And that work is funded entirely by donations.

Show your support for digital rights with free EFF member gear.

With help from people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; push the USPTO to withdraw harmful patent proposals; fight for the public's right to access police drone footage; and show why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet safety.

As technology’s reach continues to expand, so do everyone’s concerns about harmful side effects. That’s where EFF’s ample experience in tech policy, the law, and human rights shines. You can help us.

Donate to defend digital rights today and you’ll help us unlock bonus grants before the year ends.

Join EFF!

Proudly Member-Supported Since 1990

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

There’s No Copyright Exception to First Amendment Protections for Anonymous Speech

19 décembre 2024 à 11:22

Some people just can’t take a hint. Today’s perfect example is a group of independent movie distributors that have repeatedly tried, and failed, to force Reddit to give up the IP addresses of several users who posted about downloading movies. 

The distributors claim they need this information to support their copyright claims against internet service provider Frontier Communications, because it might be evidence that Frontier wasn’t enforcing its repeat infringer policy and therefore couldn’t claim safe harbor protections under the Digital Millennium Copyright Act. Courts have repeatedly refused to enforce these subpoenas, recognizing that the distributors couldn’t pass the test the First Amendment requires before anonymous speakers may be unmasked.

Here's the twist: after the magistrate judge in this case applied this standard and quashed the subpoena, the movie distributors sought review from the district court judge assigned to the case. The second judge also denied discovery as unduly burdensome but, in a hearing on the matter, said there was no First Amendment issue because the users were talking about copyright infringement. In their subsequent appeal to the Ninth Circuit, the distributors invite the appellate court to endorse the judge’s statement.

As we explain in an amicus brief supporting Reddit, the court should refuse that invitation. Discussions about illegal activity clearly are protected speech. Indeed, the Supreme Court recently affirmed that even “advocacy of illegal acts” is “within the First Amendment’s core.” In fact, protecting such speech is a central purpose of the First Amendment because it ensures that people can robustly debate civil and criminal laws and advocate for change. 

There is no reason to imagine that this bedrock principle doesn’t apply just because the speech concerns copyright infringement, especially where the speakers aren’t even defendants in the case, but independent third parties. And unmasking Doe speakers in copyright cases carries particular risks, given the long history of copyright claims being used as an excuse to take down lawful as well as infringing content online.

We’re glad to see Reddit fighting back against these improper subpoenas, and proud to stand with the company as it stands up for its users. 

UK Politicians Join Organizations in Calling for Immediate Release of Alaa Abd El-Fattah

19 décembre 2024 à 07:06

With the UK’s Prime Minister Keir Starmer and Foreign Secretary David Lammy having failed to secure the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah, UK politicians are calling for tougher measures to bring about Alaa’s immediate return to the UK.

During a debate on detained British nationals abroad in early December, chairwoman of the Commons Foreign Affairs Committee Emily Thornberry asked the House of Commons why the UK has continued to organize industry delegations to Cairo while “the Egyptian government have one of our citizens—Alaa Abd El-Fattah—wrongfully held in prison without consular access.”

In the same debate, Labour MP John McDonnell urged the introduction of a “moratorium on any new trade agreements with Egypt until Alaa is free,” which was supported by other politicians. Liberal Democrat MP Calum Miller also highlighted words from Alaa, who told his mother during a recent prison visit that he had “hope in David Lammy, but I just can’t believe nothing is happening...Now I think either I will die in here, or if my mother dies I will hold him to account.”

Alaa’s mother, mathematician Laila Soueif, has been on hunger strike for 79 days while she and the rest of his family have worked to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs.

Support for Alaa has come from many directions. On December 6, 12 Nobel laureates wrote to Keir Starmer urging him to secure Alaa’s immediate release, “Not only because Alaa is a British citizen, but to reanimate the commitment to intellectual sanctuary that made Britain a home for bold thinkers and visionaries for centuries.” The pressure on Labour’s senior politicians has continued throughout the month, with more than 100 MPs and peers writing to David Lammy on December 15 demanding Alaa be freed.

Alaa should have been released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognize Alaa’s British citizenship.

David Lammy met with Alaa’s family in November and promised to take action. But the UK’s Prime Minister failed to raise the case at the G20 Summit in Brazil when he met with Egypt’s President El-Sisi. 

If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:

  1. Write to your MP (external link): https://freealaa.net/message-mp 
  2. Join Laila Soueif outside the Foreign Office daily between 10 and 11am
  3. Share Alaa’s plight on social media using the hashtag #freealaa

The UK Prime Minister and Foreign Secretary’s inaction is unacceptable. Every second counts, and time is running out. The government must do everything it can to ensure Alaa’s immediate and unconditional release.
