Decentralization Reaches a Turning Point: 2024 in review

The steady rise of decentralized networks this year is transforming social media. Platforms like Mastodon, Bluesky, and Threads are still in their infancy but have already shown that when users are given options, innovation thrives, resulting in better tools and protections for our rights online. By moving towards a digital landscape that can’t be monopolized by one big player, we also see broader improvements to network resiliency and user autonomy.

The Steady Rise of Decentralized Networks

Fediverse and Threads

The Fediverse, a wide variety of sites and services most associated with Mastodon, continued to evolve this year. Meta’s Threads began integrating with the network, marking a groundbreaking shift for the company. Only a few years ago, EFF could only dream of the impact such an embrace of interoperability would have coming from a company notorious for building walled gardens that trap users within its platforms. By allowing Threads users to share their posts with Mastodon and the broader Fediverse (and therefore, Bluesky) without leaving their home platform, Meta is introducing millions to the benefits of interoperability. We look forward to this continued trajectory, and to a day when it is easy to move to or from Threads and still follow and interact with the same federated community.

Threads’ enormous user base—100 million daily active users—now dwarfs both Mastodon and Bluesky. Its integration into more open networks is a potential turning point in popularizing the decentralized social web. However, Meta’s poor reputation on privacy, moderation, and censorship drove many Fediverse instances to preemptively block Threads, and may fragment the network.

We explored how Threads stacks up against Mastodon and Bluesky, across moderation, user autonomy, and privacy. This development highlights the promise of decentralization, but it also serves as a reminder that corporate giants may still wield outsized influence over ostensibly open systems.

Bluesky’s Explosive Growth

While Threads dominated in sheer numbers, Bluesky was this year’s breakout star. At the start of the year, Bluesky had fewer than 200,000 users and was still invite-only. In the final months of 2024, however, the project experienced over 500% growth in a single month, ultimately reaching over 25 million users.

Unlike Mastodon, which integrates into the Fediverse, Bluesky took a different path, building its own decentralized protocol (the AT Protocol) to ensure user data and identities remain portable and users retain a “credible exit.” This innovation allows users to carry their online communities across platforms seamlessly, sparing them the frustration of rebuilding from scratch. Unlike the Fediverse, Bluesky has prioritized building a drop-in replacement for Twitter, and is still mostly centralized. Bluesky also has a growing arsenal of tools available to users, embracing community creativity and innovation.

While Bluesky will be mostly familiar to former Twitter users, we ran through some tips for managing your Bluesky feed, and answered some questions for people just joining the platform.

Competition Matters

Keeping the Internet Weird

The rise of decentralized platforms underscores the critical importance of competition in driving innovation. Platforms like Mastodon and Bluesky thrive because they fill gaps left by corporate giants, and encourage users to find experiences which work best for them. The traditional social media model puts up barriers so platforms can impose restrictive policies and prioritize profit over user experience. When the focus shifts to competition and a lack of central control, the internet flourishes.

Whether a user wants the community focus of Mastodon, the global megaphone of Bluesky, or something else entirely, smaller platforms let people build experiences independent of the motives of larger companies. Decentralized platforms are ultimately most accountable to their users, not advertisers or shareholders.

Making Tech Resilient

This year highlighted the dangers of concentrating too much power in the hands of a few dominant companies. A major global IT outage this summer starkly demonstrated the fragility of digital monocultures, where a single point of failure can disrupt entire industries. These failures underscore the importance of decentralization, where networks are designed to distribute risk, ensuring that no single system compromise can ripple across the globe.  

Decentralized projects like Meshtastic, which uses radio waves to provide internet connectivity in disaster scenarios, exemplify the kind of resilient infrastructure we need. However, even these innovations face threats from private interests. This year, a proposal from NextNav to claim the 900 MHz band for its own use put Meshtastic’s experimentation—and by extension, the broader potential of decentralized communication—at risk. As we discussed in our FCC comments, such moves illustrate how monopolistic power not only stifles competition but also jeopardizes resilient tools that could safeguard people’s connectivity.

Looking Ahead

This year saw meaningful strides toward building a decentralized, creative, and resilient internet for 2025. Interoperability and decentralization will likely continue to expand. As it does, EFF will be vigilant, watching for threats to decentralized projects and obstacles to the growth of open ecosystems.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Deepening Government Use of AI and E-Government Transition in Latin America: 2024 in Review

Policies aimed at fostering digital government processes are gaining traction in Latin America, at local and regional levels. While these initiatives can streamline access to public services, they can also make those services less accessible and less clear, and put people’s fundamental rights at risk. As we move forward, we must emphasize transparency and privacy guarantees during government digital transition processes.

Regional Approach to Digitalization 

In November, the Ninth Ministerial Conference on the Information Society in Latin America and the Caribbean approved the 2026 Digital Agenda for the region (eLAC 2026). This initiative unfolds within the UN Economic Commission for Latin America and the Caribbean (ECLAC), a regional cooperation forum focused on furthering the economic development of LAC countries.

One of the thematic pillars of eLAC 2026 is the digital transformation of the State, including the digitalization of government processes and services to improve efficiency, transparency, citizen participation, and accountability. The digital agenda also aims to improve digital identity systems to facilitate access to public services and promote cross-border digital services in a framework of regional integration. In this context, the agenda points out countries’ willingness to implement policies that foster information-sharing, ensuring privacy, security, and interoperability in government digital systems, with the goal of using and harnessing data for decision-making, policy design and governance.

This regional process reflects and feeds country-level initiatives that have also gained steam in Latin America in the last few years. Incentives for government digital transformation take shape against the backdrop of improving government efficiency, so it is critical to qualify what efficiency means in practice. Often, “efficiency” has meant budget cuts or shrinking access to public processes and benefits at the expense of fundamental rights. The promotion of fundamental rights should guide a State’s metrics for what counts as efficient and successful.

As such, while digitalization can play an important role in streamlining access to public services and facilitating the enjoyment of rights, it can also make it harder for people to access those same services and to interact with the State generally. The most vulnerable people are those in greatest need of that interaction working well, and those whose unusual circumstances often are not accommodated by the technology being used. They are also the population most likely to have scarce access to digital technologies and limited digital skills.

In addition, whereas properly integrating digital technologies into government processes and routines carries the potential to enhance transparency and civic participation, this is not a guaranteed outcome. It requires government willingness and policies oriented to these goals. Otherwise, digitalization can turn into an additional layer of complexity and distance between citizens and the State. Improving transparency and participation involves conceiving of people not only as users of government services, but as participants in the design and implementation of public policies, including those related to States’ digital transition.

Digital identity and data-interoperability systems are generally treated as a natural part of government digitalization plans. Yet they should be approached with care. As we have highlighted, effective and robust data privacy safeguards do not necessarily accompany states’ investments in these systems, despite the fact that they can expand into a regime of unprecedented data tracking. Among other recommendations and redlines, it’s crucial to support each person’s right to choose to continue using physical documentation instead of going digital.

This set of concerns stresses the importance of having an underlying institutional and normative structure to uphold fundamental rights within digital transition processes. Such a structure involves solid transparency and data privacy guarantees backed by equipped and empowered oversight authorities. Still, States often neglect the crucial role of that combination. In 2024, Mexico provided a notorious example: just as the new Mexican government took steps to advance the country’s digital transformation, it also moved to shut down key independent oversight authorities, like the National Institute for Transparency, Access to Information and Personal Data Protection (INAI).

Digitalization and Government Use of Algorithmic Systems for Rights-Affecting Purposes

AI strategies approved in different Latin American countries show how fostering government use of AI is an important lever in national AI plans and a component of government digitalization processes.

In October 2024, Costa Rica became the first Central American country to launch an AI strategy. One of its strategic axes, named “Smart Government,” focuses on promoting the use of AI in the public sector. The document highlights that by incorporating emerging technologies into public administration, it will be possible to optimize decision-making and automate bureaucratic tasks. It also envisions providing personalized services to citizens according to their specific needs. This process includes not only the automation of public services, but also the creation of smart platforms to allow more direct interaction between citizens and government.

Brazil, in turn, updated its AI strategy and published its AI Plan 2024-2028 in July. One of its axes focuses on the use of AI to improve public services. The Brazilian plan likewise envisions personalizing public services by offering citizens content that is contextual, targeted, and proactive. It involves state data infrastructures and the implementation of data interoperability among government institutions. Some of the AI-based projects proposed in the plan include early detection of neurodegenerative diseases and a “predict and protect” system to assess the school or university trajectory of students.

Each of these actions may have potential benefits, but each also comes with major challenges and risks to human rights. These involve the massive amounts of personal data, including sensitive data, that such systems may process and cross-reference to provide personalized services; potential biases and disproportionate data processing in risk assessment systems; and incentives toward a problematic assumption that automation can replace human-to-human interaction between governments and their populations. Choices about how to collect data and which technologies to adopt are ultimately political, although they are generally treated as technical matters kept at a distance from political discussion.

An important basic step relates to government transparency about the AI systems either in use by public institutions or being piloted. At a minimum, that transparency should range from actively informing people that these systems exist, with critical details on their design and operation, to qualified information and indicators about their results and impacts.

Despite the increasing adoption of algorithmic systems by public bodies in Latin America (for instance, a 2023 study mapped 113 government ADM systems in use in Colombia), robust transparency initiatives are only in their infancy. Chile stands out in that regard with its repository of public algorithms, while Brazil launched the Brazilian AI Observatory (OBIA) in 2024. Similar to the regional ILIA (Latin American Artificial Intelligence Index), OBIA features meaningful data for measuring the state of adoption and development of AI systems in Brazil, but it still doesn’t contain detailed information about AI-based systems in use by government entities.

The most challenging and controversial application from a human-rights and accountability standpoint is government use of AI in security-related activities.

Government surveillance and emerging technologies

During 2024, Argentina’s new administration, under President Javier Milei, passed a set of acts regulating its police forces’ cyber and AI surveillance capacities. One of them, issued in May, stipulates how police forces must conduct “cyberpatrolling,” or Open-Source Intelligence (OSINT), to prevent crimes. OSINT activities do not necessarily entail the use of AI, but they have increasingly integrated AI models, which facilitate the analysis of huge amounts of data. While OSINT has important and legitimate uses, including in investigative journalism, its application for government surveillance purposes has raised many concerns and led to abuses.

Another regulation, issued in July, created the “Unit of Artificial Intelligence Applied to Security” (UIAAS). The powers of the new agency include “patrolling open social networks, applications and Internet sites” as well as “using machine learning algorithms to analyze historical crime data and thus predict future crimes”. Civil society organizations in Argentina, such as Observatorio de Derecho Informático Argentino, Fundación Vía Libre, and Access Now, have gone to court to enforce their right to access information about the newly created unit.

The persistent opacity and lack of effective remedies for abuses in government use of digital surveillance technologies in the region prompted action from the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights (IACHR). The Office of the Special Rapporteur carried out a consultation to receive input on digitally powered surveillance abuses, the state of digital surveillance legislation, the reach of the private surveillance market in the region, transparency and accountability challenges, and gaps and best-practice recommendations. EFF joined expert interviews and submitted comments in the consultation process. The final report will be published next year with important analysis and recommendations.

Moving Forward: Building on Inter-American Human Rights Standards for Proper Government Use of AI in Latin America

Considering this broader context of challenges, we launched a comprehensive report on the application of Inter-American Human Rights Standards to government use of algorithmic systems for rights-based determinations. Delving into Inter-American Court decisions and IACHR reports, we provide guidance on what state institutions must consider when assessing whether and how to deploy AI and ADM systems for determinations potentially affecting people's rights.

We detailed what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explained why this adoption must meet necessary and proportionate principles, and what this entails. We highlighted what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment. We elaborated on human rights implications building off key rights enshrined in the American Convention on Human Rights and the Protocol of San Salvador, setting up an operational framework for their due application.

Based on the report, we have connected with oversight institutions, joining trainings for public prosecutors in Mexico and strengthening ties with the Public Defender’s Office in the state of São Paulo, Brazil. Our goal is to provide input for their adequate adoption of AI/ADM systems, and for fulfilling their role as public-interest entities with regard to government use of algorithmic systems more broadly.

Enhancing public oversight of state deployment of rights-affecting technologies, in a context of marked government digitalization, is essential for democratic policymaking and human rights-aligned government action. Civil society also plays a critical role, and we will keep working to raise awareness about potential impacts, pushing for rights to be fortified, not eroded, along the way.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Kids Online Safety Act Continues to Threaten Our Rights Online: 2024 in Review

At times this year, it seemed that Congress was going to give up its duty to protect our rights online—particularly when the Senate passed the dangerous Kids Online Safety Act (KOSA) by a large majority in July. But this legislation, which would chill protected speech and almost certainly result in privacy-invasive age verification requirements for many users to access social media sites, did not pass the House this year, thanks to strong opposition from EFF supporters and others.  

KOSA, first introduced in 2022, would allow the Federal Trade Commission to sue apps and websites that don’t take measures to restrict young people’s access to content. Congress introduced a number of versions of the bill this year, and we analyzed each of them. Unfortunately, the threat of this legislation still looms over us as we head into 2025, especially now that the bill has passed the Senate. And just a few weeks ago, its authors introduced an amended version to respond to criticisms from some House members.  

Despite its many amendments in 2024, we continue to oppose KOSA. No matter which version becomes final, the bill will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms it identifies.   

Here’s how, and why, we worked to stop KOSA this year, and where the fight stands now.  

New Versions, Same Problems

The biggest problem with KOSA is in its vague “duty of care” requirements. Imposing a duty of care on a broad swath of online services, and requiring them to mitigate specific harms based on the content of online speech, will result in those services imposing age verification and content restrictions. We’ve been critical of KOSA for this reason since it was introduced in 2022. 

In February, KOSA's authors in the Senate released an amended version of the bill, in part as a response to criticisms from EFF and other groups. The updates changed how KOSA regulates design elements of online services and removed some enforcement mechanisms, but didn’t significantly change the duty of care, or the bill’s main effects. The updated version of KOSA would still create a censorship regime that would harm a large number of minors who have First Amendment rights to access lawful speech online, and force users of all ages to verify their identities to access that same speech, as we wrote at the time. KOSA’s requirements are comparable to cases in which the government tried to prevent booksellers from disseminating certain books; those attempts were found unconstitutional.

Kids Speak Out

The young people who KOSA supporters claim they’re trying to help have spoken up about the bill. In March, we published the results of a survey of young people who gave detailed reasons for their opposition to the bill. Thousands told us how beneficial access to social media platforms has been for them, and why they feared KOSA’s censorship. Too often we’re not hearing from minors in these debates at all, but we should be, because they will be most heavily impacted if KOSA becomes law.

Young people told us that KOSA would negatively impact their artistic education, their ability to find community online, their opportunity for self-discovery, and the ways that they learn accurate news and other information. To sample just a few of the comments: Alan, a fifteen-year-old, wrote,

I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for!  

More Recent Changes To KOSA Haven’t Made It Better 

In May, the U.S. House introduced a companion version to the Senate bill. This House version modified the bill around the edges, but failed to resolve its fundamental censorship problems. The primary difference in the House version was to create tiers that change how the law would apply to a company, depending on its size.  

These are insignificant changes, given that most online speech happens on just a handful of the biggest platforms. Those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care and would be held to the strictest knowledge standard. 

The other major shift was to update the definition of “compulsive usage” by suggesting it be linked to the Diagnostic and Statistical Manual of Mental Disorders, or DSM. But simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. 

KOSA Passes the Senate

KOSA passed through the Senate in July, though legislators on both sides of the aisle remain critical of the bill.  

A version of KOSA introduced in September tinkered with the bill again but did not change the censorship requirements. This version replaced language about anxiety and depression with a requirement that apps and websites prevent “serious emotional disturbance.”

In December, the Senate released yet another version of the bill—this one written with the assistance of X CEO Linda Yaccarino. This version includes a throwaway line about protecting the viewpoint of users as long as those viewpoints are “protected by the First Amendment to the Constitution of the United States.” But user viewpoints were never threatened by KOSA; rather, the bill has always meant to threaten the hosts of user speech—and it still does.

KOSA would allow the FTC to exert control over online speech, and there’s no reason to think the incoming FTC won’t use that power. The nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has promised to protect free speech by “fighting back against the trans agenda,” among other things. KOSA would give the FTC, under this or any future administration, wide latitude to decide what sort of content should be restricted because regulators view it as harmful to kids. And even if it’s never enforced, just passing KOSA would likely result in platforms taking down protected speech.

If KOSA passes, we’re also concerned that it would lead to mandatory age verification on apps and websites. Such requirements have their own serious privacy problems; you can read more about our efforts this year to oppose mandatory online ID in the U.S. and internationally.   

EFF thanks our supporters, who have sent nearly 50,000 messages to Congress on this topic, for helping us oppose KOSA this year. In 2025, we will continue to rally to protect privacy and free speech online.   

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

AI and Policing: 2024 in Review

There’s no part of your life now where you can avoid the onslaught of “artificial intelligence.” Whether you’re trying to search for a recipe and sifting through AI-made summaries or listening to your cousin talk about how they’ve fired their doctor and replaced them with a chatbot, it seems now, more than ever, that AI is the solution to every problem. But, in the meantime, some people are getting hideously rich by convincing people with money and influence that they must integrate AI into their business or operations.

Enter law enforcement.

When many tech vendors see police, they see dollar signs. Law enforcement’s got deep pockets. They are under political pressure to address crime. They are eager to find that one magic bullet that finally might do away with crime for good. All of this combines to make them a perfect customer for whatever way technology companies can package machine-learning algorithms that sift through historical data in order to do recognition, analytics, or predictions.

AI in policing can take many forms that we can trace back decades, including various forms of face recognition, predictive policing, data analytics, automated gunshot recognition, and more. But this year has seen the rise of a new and troublesome development in the integration of policing and artificial intelligence: AI-generated police reports.

Companies like Truleo and Axon are driving a rapidly growing market for tools that use large language models to write police reports for officers. In the case of Axon, this is done by using the audio from police body-worn cameras to create narrative reports with minimal officer input, except for a prompt to add a few details here and there.
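
To make that workflow concrete, here is a minimal sketch of how such a report generator might be wired together. Everything in it (the function names, the prompt, the canned outputs) is a hypothetical stand-in, not Axon’s actual pipeline or API.

```python
# Hypothetical sketch of an LLM-drafted police report pipeline. The function
# names, prompt, and canned strings are illustrative stand-ins, not any
# vendor's real API.

def transcribe_bodycam_audio(audio_path: str) -> str:
    # Stand-in for a speech-to-text pass over body-worn camera audio.
    return "Dispatch to 5th and Main. Subject detained without incident."

def call_llm(prompt: str) -> str:
    # Stand-in for a large language model call.
    return "On the above date, I responded to 5th and Main, where..."

def draft_police_report(audio_path: str, officer_details: str) -> str:
    transcript = transcribe_bodycam_audio(audio_path)
    prompt = (
        "Write a first-person police report narrative from this bodycam "
        f"transcript:\n{transcript}\nOfficer-added details: {officer_details}"
    )
    # The officer's only input is a short prompt of extra details;
    # the model writes the narrative.
    return call_llm(prompt)

print(draft_police_report("shift_042.wav", "subject wore a red jacket"))
```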

We wrote about what can go wrong when towns start letting their police write reports using AI. First and foremost, no matter how many boxes police check to say they are responsible for the content of the report, when cross-examination reveals lies in a police report, officers will now have the veneer of plausible deniability by saying, “the AI wrote that part.” After all, we’ve all heard of AI hallucinations at this point, right? And don’t we all just click through terms of service without reading them carefully?

And there are so many more questions we have. Translation is an art, not a science, so how and why will this AI understand and depict things like physical conflict, or important rhetorical tools of policing like the phrases “stop resisting” and “drop the weapon,” even when a person is unarmed or is not resisting? How well does it understand sarcasm? Slang? Regional dialect? Languages other than English? Even if the tool was not explicitly made to handle these situations, officers left to their own devices will use it for any and all reports.

Prosecutors in Washington have even asked police not to use AI to write police reports (for now) out of fear that errors might jeopardize trials.

Countless movies and TV shows have depicted police hating paperwork, and if these pop-culture representations are any indicator, we should expect this technology to spread rapidly in 2025. That’s why EFF is monitoring its spread closely and providing more information as we continue to learn about how it’s being used.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting Online ID Mandates: 2024 In Review

This year, nearly half of U.S. states passed laws imposing age verification requirements on online platforms. EFF has opposed these efforts because they censor the internet and burden access to online speech. Though age verification mandates are often touted as “online safety” measures for kids, the laws actually do more harm than good. They undermine the fundamental speech rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security.

Age verification bills generally require online services to verify all users’ ages—often through invasive tools like ID checks, biometric scans, and other dubious “age estimation” methods—before granting them access to certain online content or services. Some state bills mandate the age verification explicitly, including Texas’s H.B. 1181, Florida’s H.B. 3, and Indiana’s S.B. 17. Other state bills claim not to require age verification, but still threaten platforms with liability for showing certain content or features to minor users. These bills—including Mississippi’s H.B. 1126, Ohio’s Parental Notification by Social Media Operators Act, and the federal Kids Online Safety Act—raise the question: how are platforms to know which users are minors without imposing age verification?

EFF’s answer: they can’t. We call these bills “implicit age verification mandates” because, though they might expressly deny requiring age verification, they still force platforms to either impose age verification measures or, worse, censor whatever content or features are deemed “harmful to minors” for all users—not just young people—in order to avoid liability.

Age verification requirements are the wrong approach to protecting young people online. No one should have to hand over their most sensitive personal information or submit to invasive biometric surveillance just to access lawful online speech.

EFF’s Work Opposing State Age Verification Bills

Last year, we saw a slew of dangerous social media regulations for young people introduced across the country. This year, the flood of ill-advised bills grew larger. As of December 2024, nearly every U.S. state legislature has introduced at least one age verification bill, and nearly half the states have passed at least one of these proposals into law.

Courts agree with our position on age verification mandates. Across the country, courts have repeatedly and consistently held these so-called “child safety” bills unconstitutional, confirming that it is nearly impossible to impose online age-verification requirements without violating internet users’ First Amendment rights. In 2024, federal district courts in Ohio, Indiana, Utah, and Mississippi enjoined those states’ age verification mandates. The decisions underscore how these laws, in addition to being unconstitutional, are also bad policy. Instead of seeking to censor the internet or block young people from it, lawmakers seeking to help young people should focus on advancing legislation that solves the most pressing privacy and competition problems for all users—without restricting their speech.

Here’s a quick review of EFF’s work this year to fend off state age verification mandates and protect digital rights in the face of this legislative onslaught.

California

In January, we submitted public comments opposing an especially vague and poorly written proposal: California Ballot Initiative 23-0035, which would allow plaintiffs to sue online information providers for damages of up to $1 million if they violate their “responsibility of ordinary care and skill to a child.” We pointed out that this initiative’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults, and cause platforms to censor user content and impose mandatory age verification in order to avoid this legal risk. Thankfully, this measure did not make it onto the 2024 ballot.

In February, we filed a friend-of-the-court brief arguing that California’s Age Appropriate Design Code (AADC) violated the First Amendment. Our brief asked the Ninth Circuit Court of Appeals to rule narrowly that the AADC’s age estimation scheme and vague description of “harmful content” render the entire law unconstitutional, even though the bill also contained several privacy provisions that, stripped of the unconstitutional censorship provisions, could otherwise survive. In its decision in August, the Ninth Circuit confirmed that parts of the AADC likely violate the First Amendment and provided a helpful roadmap to legislatures for how to write privacy-first laws that can survive constitutional challenges. However, the court missed an opportunity to strike down the AADC’s age-verification provision specifically.

Later in the year, we also filed a letter to California lawmakers opposing A.B. 3080, a proposed state bill that would have required internet users to show their ID in order to look at sexually explicit content. Our letter explained that bills that allow politicians to define what “sexually explicit” content is and enact punishments for those who engage with it are inherently censorship bills—and they never stop with minors. We declared victory in September when the bill failed to get passed by the legislature.

New York

Similarly, after New York passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year, we filed comments urging the state attorney general (who is responsible for writing the rules to implement the bill) to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. We also noted that none of the many methods of age verification listed in the attorney general’s call for comments is both privacy-protective and entirely accurate, as various experts have reported.

Texas

We also took the fight to Texas, which passed a law requiring all Texas internet users, including adults, to submit to invasive age verification measures on every website deemed by the state to be at least one-third composed of sexual material. After a federal district court put the law on hold, the Fifth Circuit reversed and let the law take effect—creating a split among federal circuit courts on the constitutionality of age verification mandates. In May, we filed an amicus brief urging the U.S. Supreme Court to grant review of the Fifth Circuit’s decision and to ultimately overturn the Texas law on First Amendment grounds.

In September, after the Supreme Court accepted the Texas case, we filed another amicus brief on the merits. We pointed out that the Fifth Circuit’s flawed ruling diverged from decades of legal precedent recognizing, correctly, that online ID mandates impose greater burdens on our First Amendment rights than in-person age checks. We explained that there is nothing about this Texas law or advances in technology that would lessen the harms that online age verification mandates impose on adults wishing to exercise their constitutional rights. The Supreme Court has set this case, Free Speech Coalition v. Paxton, for oral argument in February 2025.

Mississippi

Finally, we supported the First Amendment challenge to Mississippi’s age verification mandate, H.B. 1126, by filing amicus briefs both in the federal district court and on appeal to the Fifth Circuit. Mississippi’s extraordinarily broad law requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from exposure to materials deemed “harmful” by state officials.

In our June brief for the district court, we once again explained that online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their IDs in physical spaces, and impose significant barriers on adults’ ability to access lawful speech online. The district court agreed with us, issuing a decision that enjoined the Mississippi law and heavily cited our amicus brief.

Upon Mississippi’s appeal to the Fifth Circuit, we filed another amicus brief—this time highlighting H.B. 1126’s dangerous impact on young people’s free expression. After all, minors enjoy the same First Amendment right as adults to access and engage in protected speech online, and online spaces are diverse and important spaces where minors can explore their identities—whether by creating and sharing art, practicing religion, or engaging in politics—and seek critical resources and support for the very same harms these bills claim to address. In our brief, we urged the court to recognize that age-verification regimes like Mississippi’s place unnecessary and unconstitutional barriers between young people and these online spaces that they rely on for vibrant self-expression and crucial support.

Looking Ahead

As 2024 comes to a close, the fight against online age verification is far from over. As the state laws continue to proliferate, so too do the legal challenges—several of which are already on file.

EFF’s work continues, too. As we move forward in state legislatures and courts, at the federal level here in the United States, and all over the world, we will continue to advocate for policies that protect the free speech, privacy, and security of all users—adults and young people alike. And, with your help, we will continue to fight for the future of the open internet, ensuring that all users—especially the youth—can access the digital world without fear of surveillance or unnecessary restrictions.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Federal Regulators Limit Location Brokers from Selling Your Whereabouts: 2024 in Review

The opening and closing months of 2024 saw federal enforcement against a number of location data brokers that track and sell users’ whereabouts through apps installed on their smartphones. In January, the Federal Trade Commission brought successful enforcement actions against X-Mode Social and InMarket, banning the companies from selling precise location data—a first prohibition of this kind for the FTC. And in December, the FTC widened its net to two additional companies—Gravy Analytics (Venntel) and Mobilewalla—barring them from selling or disclosing location data on users visiting sensitive areas such as reproductive health clinics or places of worship. In previous years, the FTC has sued location brokers such as Kochava, but the invasive practices of these companies have only gotten worse. Seeing the federal government ramp up enforcement is a welcome development for 2024.

As regulators have clearly stated, location information is sensitive personal information. Companies can glean location information from your smartphone in a number of ways. Software Development Kits (SDKs) from some companies instruct the apps that embed them to send back troves of sensitive information for analytical insights or debugging purposes, and the data brokers may offer market insights or financial incentives to app developers who include their SDKs. Other companies don’t ask apps to include their SDKs directly, but instead participate in Real-Time Bidding (RTB) auctions, placing bids for ad space on devices in locations they specify. Even if they lose the auction, they can glean valuable device location information just by participating. Often, apps will ask for permissions such as location data for legitimate reasons aligned with the purpose of the app: for example, a price comparison app might use your whereabouts to show you the cheapest vendor of a product you’re interested in for your area. What you aren’t told is that your location is also shared with companies tracking you.
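
To illustrate why even a lost bid is valuable, here is a toy sketch, with entirely hypothetical field names and values, of how a broker could stitch together a single device’s movements from both collection paths, keyed on the advertising ID that travels with each sighting.

```python
# Toy illustration (hypothetical fields and values): two different collection
# paths, one device trail. A shared advertising ID is enough to link
# location sightings across space and time.
from dataclasses import dataclass

@dataclass
class Sighting:
    ad_id: str    # advertising identifier shared across apps
    lat: float
    lon: float
    ts: int       # Unix timestamp
    source: str   # "sdk_telemetry" or "rtb_bid_request"

def build_trails(sightings: list) -> dict:
    """Group sightings by advertising ID, regardless of collection path."""
    trails = {}
    for s in sorted(sightings, key=lambda s: s.ts):
        trails.setdefault(s.ad_id, []).append((s.ts, s.lat, s.lon, s.source))
    return trails

trail = build_trails([
    Sighting("ad-1234", 33.96, -117.38, 1700000000, "sdk_telemetry"),
    Sighting("ad-1234", 33.98, -117.40, 1700003600, "rtb_bid_request"),
])
print(trail)  # one device, one movement trail, two unrelated collection paths
```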

A number of revelations this year gave us better insight into how the location data broker industry works, revealing the inner workings of powerful tools such as Locate X, which allows even those merely claiming they might work with law enforcement at some point in the future to access troves of mobile location data from across the planet. The mobile location tracking company FOG Data Science, which EFF revealed in 2022 to be selling troves of information to local police, was found this year to also be soliciting law enforcement for information on suspects’ doctors, in order to track those suspects via their doctor visits.

EFF detailed how these tools can be stymied via technical means, such as changing a few key settings on your mobile device to disallow data brokers from linking your location across space and time. We further outlined legislative avenues to ensure structural safeguards are put in place to protect us all from an out-of-control predatory data industry.

In addition to FTC action, the Consumer Financial Protection Bureau proposed a new rule meant to crack down on the data broker industry. As the CFPB mentioned, data brokers compile highly sensitive information—like information about a consumer's finances, the apps they use, and their location throughout the day. The rule would include stronger consent requirements and protections for personal data that has been purportedly de-identified. Given the abuses the announcement cites, including the distribution and sale of “detailed personal information about military service members, veterans, government employees, and other Americans,” we hope to see adoption and enforcement of this proposed rule in 2025.

This year has seen a strong regulatory appetite to protect consumers from harms which in bygone years would have seemed unimaginable: detailed records on the movements of nearly everyone, packaged and made available for pennies. We hope 2025 continues this appetite to address the dangers of location data brokers.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Exposing Surveillance at the U.S.-Mexico Border: 2024 Year in Review in Pictures

Some of the most picturesque landscapes in the United States can be found along the border with Mexico. Yet, from San Diego’s beaches to the Sonoran Desert, from Big Bend National Park to the Boca Chica wetlands, we see vistas marred by the sinister spread of surveillance technology, courtesy of the federal government.  

EFF refuses to let this blight grow without documenting it, exposing it, and finding ways to fight back alongside the communities that live in the shadow of this technological threat to human rights.  

Here’s a gallery of images representing our work and the new developments we’ve discovered in border surveillance in 2024.

1. Mapping Border Surveillance  

A map of the US-Mexico border, with dots representing surveillance towers.

EFF’s stand-up display of surveillance at the US-Mexico border. Source: EFF

EFF published the first iteration of our map of surveillance towers at the U.S.-Mexico border in spring 2023, having pinpointed the precise locations of 290 towers, a fraction of what we knew might be out there. A year and a half later, with the help of local residents, researchers, and search-and-rescue groups, our map now includes more than 500 towers.

In many cases, the towers are brand new, with some going up as recently as this fall. We’ve also added the location of surveillance aerostats, checkpoint license plate readers, and face recognition at land ports of entry. 

In addition to our online map, we also created a 10’ x 7’ display that we debuted at “Regardless of Frontiers: The First Amendment and the Exchange of Ideas Across Borders,” a symposium held by the Knight First Amendment Institute at Columbia University in October. If your institution would be interested in hosting it, please email us at aos@eff.org.

2. Infrastructures of Control

An overhead view of a University of Arizona courtyard with a model surveillance tower and mounted photographs of surveillance technology. A person looks at one of the displays.

The Infrastructures of Control exhibit at University of Arizona. Source: EFF

Two University of Arizona geographers—Colter Thomas and Dugan Meyer—used our map to explore the border, driving on dirt roads and hiking in the desert, to document the infrastructure that comprises the so-called “virtual wall.” The result: “Infrastructures of Control,” a photography exhibit in April at the University of Arizona that also included a near-actual-size replica of an “autonomous surveillance tower.”

You can read our interview with Thomas and Meyer here.

3. An Old Tower, a New Lease in Calexico 

A surveillance tower over a one-story home.

A remote video surveillance system in Calexico, Calif. Source: EFF

Way back in 2000, the Immigration and Naturalization Service—which oversaw border security prior to the creation of Customs and Border Protection (CBP) within the Department of Homeland Security (DHS)—leased a small square of land in a public park in Calexico, Calif., where it then installed one of the earliest border surveillance towers. The lease lapsed in 2020, and with plans for a massive surveillance upgrade looming, CBP rushed to try to renew the lease this year.

This was especially concerning because of CBP’s new strategy of combining artificial intelligence with border camera feeds.  So EFF teamed up with the Imperial Valley Equity and Justice Coalition, American Friends Service Committee, Calexico Needs Change, and Southern Border Communities Coalition to try to convince the Calexico City Council to either reject the lease or demand that CBP enact better privacy protections for residents in the neighboring community and children playing in Nosotros Park. Unfortunately, local politics were not in our favor. However, resisting border surveillance is a long game, and EFF considers it a victory that this tower even got a public debate at all. 

4. Aerostats Up in the Air 

A white blimp on the ground in the middle of the desert.

The Tactical Aerostat System at Santa Teresa Station. Source: Battalion Search and Rescue (CC BY)

CBP seems incapable of developing a coherent strategy when it comes to tactical aerostats—tethered blimps equipped with long-range, high-definition cameras. In 2021, the agency said it wanted to cancel the program, which involved four aerostats in the Rio Grande Valley, before reversing itself. Then in 2022, CBP launched new aerostats in Nogales, Ariz., and Columbus, N.M., and announced plans to launch 17 more within a year.

But by 2023, CBP had left the program out of its proposed budget, saying the aerostats would be decommissioned. 

And yet, in fall 2024, CBP launched a new aerostat at the Santa Teresa Border Patrol Station in New Mexico. Our friends at Battalion Search & Rescue gathered photo evidence for us. Soon after, CBP issued a new solicitation for the aerostat program and a member of Congress told Border Report that the aerostats may be upgraded and as many as 12 new ones may be acquired by CBP via the Department of Defense.

Meanwhile, one of CBP’s larger Tethered Aerostats Radar Systems in Eagle Pass, Texas was down for most of the year after deflating in high winds. CBP has reportedly not been interested in paying hundreds of thousands of dollars to get it up again.   

5. New Surveillance in Southern Arizona

A desert scene with a rust-colored pole surrounded by razor wire.

A Buckeye Camera on a pole along the border fence near Sasabe, Ariz. Source: EFF

Buckeye Cameras are motion-triggered cameras that were originally designed for hunters and ranchers to spot wildlife, but border enforcement authorities—both federal and state/local—realized years ago that they could be used to photograph people crossing the border. These cameras are often camouflaged (e.g. hidden in trees, disguised as garbage, or coated in sand).  

Now, CBP is expanding its use of Buckeye Cameras. During a trip to Sasabe, Ariz., we discovered that CBP is now placing Buckeye Cameras in checkpoints, welding them to the border fence, and installing metal poles, wrapped in concertina wire, with Buckeye Cameras at the top.

A zoomed-in image of a surveillance tower on a desert hill.

A surveillance tower along the highway west of Tucson. Source: EFF

On that same trip to Southern Arizona, EFF (along with the Infrastructures of Control geographers) passed through a checkpoint west of Tucson where we had previously identified a relocatable surveillance tower. But this time it was gone. Why, we wondered? Our question was answered just a minute or two later, when we spotted a new surveillance tower on a nearby hilltop, a model that we had not previously seen deployed in the wild.

6. Artificial Intelligence  

An overly complicated graphic of surveillance towers, blimps, and other technologies

A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection

CBP and other agencies regularly hold “Industry Days” to brief contractors on the new technology and capabilities the agency may want to buy in the near future. In January, EFF attended one such “Industry Day,” designed to bring tech vendors up to speed on the government’s horrific vision of a border secured by artificial intelligence (see the graphic above for an example of that vision).

A complicated graphic illustrating how technology tracks someone crossing the border.

A graphic from a January 2024 “Industry Day” event. Source: Customs & Border Protection

At this event, CBP released the convoluted flow chart above as part of a slide show. Since it’s so difficult to parse, here’s the best sense we can make of it: when someone crosses the border, they trigger an unattended ground sensor (UGS); a camera then autonomously detects, identifies, classifies, and tracks the person, handing them off from camera to camera, until the AI system eventually alerts Border Patrol to dispatch someone to intercept them for detention.
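
Rendered as code, our speculative reading of that chart looks something like the sketch below. Every name in it is hypothetical, reconstructed from the graphic rather than from any actual CBP system.

```python
# Speculative reconstruction of the flow chart, with hypothetical names:
# sensor trip -> camera-to-camera tracking -> dispatch alert.
from dataclasses import dataclass

@dataclass
class CameraSighting:
    camera_id: str
    location: tuple  # (lat, lon)

def run_pipeline(ugs_tripped: bool, sightings: list) -> None:
    if not ugs_tripped:
        return  # nothing happens until an unattended ground sensor fires
    track = []
    for s in sightings:
        track.append(s)  # each camera hands the target off to the next
    if track:
        last = track[-1]
        print(f"ALERT Border Patrol: intercept near {last.location}, "
              f"last seen by {last.camera_id}")

# Example: a sensor trip followed by two camera handoffs.
run_pipeline(True, [
    CameraSighting("tower-A", (31.51, -111.01)),
    CameraSighting("tower-B", (31.53, -111.02)),
])
```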

7. Congress in Virtual Reality

A member of Congress sitting at a table with a white Meta Quest 2 headset strapped to his head.

Rep. Scott Peters on our VR tour of the border. Source: Peters’ Instagram

We search for surveillance on the ground. We search for it in public records. We search for it in satellite imagery. But we’ve also learned we can use virtual reality in combination with Google Streetview not only to investigate surveillance, but also to introduce policymakers to the realities of the policies they pass. This year, we gave Rep. Scott Peters (D-San Diego) and his team a tour of surveillance at the border in VR, highlighting the impact on communities.  

“[EFF] reminded me of the importance of considering cost-effectiveness and Americans’ privacy rights,” Peters wrote afterward in a social media post.

We also took members of Rep. Mark Amodei’s (R-Reno) district staff on a similar tour. Other Congressional staffers should contact us at aos@eff.org if you’d like to try it out.  

Learn more about how EFF uses VR to research the border in this interview and this lightning talk.

8. Indexing Border Tech Companies 

A militarized off-road vehicle in an exhibition hall.

An HDT Global vehicle at the 2024 Border Security Expo. Source: Dugan Meyer (CC0 1.0 Universal)

In partnership with the Heinrich Böll Foundation, EFF and University of Nevada, Reno student journalist Andrew Zuker built a dataset of hundreds of vendors marketing technology to the U.S. Department of Homeland Security. As part of this research, Zuker journeyed to El Paso, Texas for the Border Security Expo, where he systematically gathered information from all the companies promoting their surveillance tools. You can read Zuker’s firsthand report here.

9. Plataforma Centinela Inches Skyward 

A small surveillance trailer outside a retail store in Ciudad Juarez.

An Escorpión unit, part of the state of Chihuahua’s Plataforma Centinela project. Source: EFF

In fall 2023, EFF released its report on the Plataforma Centinela, a massive surveillance network being built by the Mexican state of Chihuahua in Ciudad Juarez that will include more than 10,000 cameras, face recognition, artificial intelligence, and tablets that police can use to access all this data from the field. At its center is the Torre Centinela, a 20-story headquarters that was supposed to be completed in 2024.

A construction site with a crane and the first few floors of a skyscraper.

The site of the Torre Centinela in downtown Ciudad Juarez. Source: EFF

We visited Ciudad Juarez in May 2024 and saw that indeed, new cameras had been installed along roadways, and the government had begun using “Escorpión” mobile surveillance units, but the tower was far from being completed. A reporter who visited in November confirmed that not much more progress had been made, although officials claim that the system will be fully operational in 2025.

10. EFF’s Border Surveillance Zine 

Two copies of the purple-themed Border Surveillance Technology zine.

Do you want to review even more photos of surveillance that can be found at the border, whether planted in the ground, installed by the side of the road, or floating in the air? Download EFF’s new zine in English or Spanish—or if you live or work in the border region, email us at aos@eff.org and we’ll mail you hard copies.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting Automated Oppression: 2024 in Review

EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024, the topic has been more active than ever before, with landlords, employers, regulators, and police adopting new tools that have the potential to impact both personal freedom and access to necessities like medicine and housing.

This year, we wrote detailed reports and comments to US and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to issues of fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train such a system on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology that automates systemic, historical injustice. And because these technologies don’t (and typically can’t) explain their reasoning, challenging their outputs is very difficult.
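
A toy example makes the mechanism concrete. The sketch below, built on entirely made-up records, “learns” approval decisions from biased history and then faithfully reproduces that bias.

```python
# Toy illustration with made-up data: a "model" that simply learns per-group
# approval rates from historical records will automate the historical bias.
from collections import defaultdict

# Hypothetical history: group B was approved far less often, for reasons
# unrelated to merit.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 30 + [("B", False)] * 70)

rates = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    rates[group][0] += int(approved)
    rates[group][1] += 1

def predict(group: str) -> bool:
    approvals, total = rates[group]
    return approvals / total >= 0.5  # decision rule discerned from the past

print(predict("A"))  # True  -- the old pattern, now automated
print(predict("B"))  # False -- historical injustice, reproduced at scale
```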

It’s important to note that decision makers tend to defer to ADMs or use them as cover to justify their own biases. And even though they are implemented to change how decisions are made by government officials, the adoption of an ADM is often considered a mere ‘procurement’ decision like buying a new printer, without the kind of public involvement that a rule change would ordinarily entail. This, of course, increases the likelihood that vulnerable members of the public will be harmed and that technologies will be adopted without meaningful vetting. While there may be positive use cases for machine learning to analyze government processes and phenomena in the world, making decisions about people is one of the worst applications of this technology, one that entrenches existing injustice and creates new, hard-to-discover errors that can ruin lives.

Vendors of ADM have been riding a wave of AI hype, and police, border authorities, and spy agencies have gleefully thrown taxpayer money at products that make it harder to hold them accountable, while remaining unproven at offering any other ‘benefit.’ We’ve written about the use of generative AI to write police reports based on the audio from bodycam footage, flagged how national security use of AI is a threat to transparency, and called for an end to AI use in immigration decisions.

The private sector is also deploying ADM to make decisions about people’s access to employment, housing, medicine, and more. People have an intuitive understanding of some of the risks this poses, with most Americans expressing discomfort about the use of AI in these contexts. Companies can make a quick buck firing people and demanding the remaining workers figure out how to implement snake-oil ADM tools to make these decisions faster, though it’s becoming increasingly clear that this isn’t delivering the promised productivity gains.

ADM can, however, help a company avoid being caught making discriminatory decisions that violate civil rights laws—one reason why we support mechanisms to prevent unlawful private discrimination using ADM. Finally, the hype around AI and the allure of ADMs has further incentivized the collection and monetization of more and more user data and more invasions of privacy online, part of why we continue to push for a privacy-first approach to many of the harmful applications of these technologies.

In EFF’s podcast episode on AI, we discussed some of the challenges posed by AI and some of the positive applications this technology can have when it’s not used at the expense of people’s human rights, well-being, and the environment. Unless something dramatically changes, though, using AI to make decisions about human beings is unfortunately doing a lot more harm than good.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

State Legislatures Are The Frontline for Tech Policy: 2024 in Review

State lawmakers are increasingly shaping the conversation on technology and innovation policy in the United States. As Congress continues to deliberate key issues such as data privacy, police use of data, and artificial intelligence, lawmakers are rapidly advancing their own ideas into state law. That’s why EFF fights for internet rights not only in Congress, but also in statehouses across the country.

This year, some of that work has been to defend good laws we’ve passed before. In California, EFF worked to oppose and defeat S.B. 1076, by State Senator Scott Wilk, which would have undermined the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to ask data brokers registered in California to remove their personal information. S.B. 1076 would have opened loopholes for data brokers to duck compliance with this common-sense, consumer-friendly tool. We were glad to stop it before it got very far.

Also in California, EFF worked with dozens of organizations led by ACLU California Action to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting. The bill would have made it easy for police to evade accountability, and we are glad to see the California legislature reject this dangerous bill. For the full rundown of our highlights and lowlights in California, you can check out our recap of this year’s session.

EFF also supported efforts from the ACLU of Massachusetts to pass the Location Shield Act, which, as introduced, would have required companies to get consent before collecting or processing location data and largely banned the sale of location data. While the bill did not become law this year, we look forward to continuing the fight to push it across the finish line in 2025.

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues.

States Continue to Experiment

Several states also introduced bills this year that raise issues similar to those in the federal Kids Online Safety Act, which attempts to address young people’s safety online but instead introduces considerable censorship and privacy concerns.

For example, in California, we were able to stop A.B. 3080, authored by Assemblymember Juan Alanis. We opposed this bill for many reasons, including that it was not clear what counted as “sexually explicit content” under its definition. This vagueness would have created barriers for youth—particularly LGBTQ+ youth—seeking to access legitimate content online.

We also oppose any bills, including A.B. 3080, that require age verification to access certain sites or social media networks. Lawmakers in more than a dozen states filed bills with this requirement. As we said in comments to the New York Attorney General’s office on the state’s recently passed “SAFE for Kids Act,” none of the age-verification methods the state was considering are both privacy-protective and entirely accurate. Age-verification requirements harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

We also continue to watch lawmakers attempting to regulate the creation and spread of deepfakes. Many of these proposals, while well-intentioned, are written in ways that likely violate First Amendment rights to free expression. In fact, less than a month after California’s governor signed a deepfake bill into law, a federal judge paused its enforcement (via a preliminary injunction) on First Amendment grounds. We encourage lawmakers to explore ways to address the harms that deepfakes pose without endangering speech rights.

On a brighter note, some state lawmakers are learning from gaps in existing privacy law and working to improve standards. In the past year, both Maryland and Vermont have advanced bills that significantly improve on state privacy laws we’ve seen before. The Maryland Online Data Privacy Act (MODPA)—authored by State Senator Dawn Gile and Delegate Sara Love (now State Senator Sara Love)—contains strong data minimization requirements. Vermont’s privacy bill, authored by State Rep. Monique Priestley, included the crucial right for individuals to sue companies that violate their privacy. Unfortunately, while the bill passed both houses, it was vetoed by Vermont Gov. Phil Scott. As private rights of action are among our top priorities in privacy laws, we look forward to seeing more bills this year that contain this important enforcement measure.

Looking Ahead to 2025

2025 will be a busy year for anyone who works in state legislatures. We already know that state lawmakers are working together on issues such as AI legislation. As we’ve said before, we look forward to being a part of these conversations and encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms. 

As deadlock continues in Washington D.C., state lawmakers will continue to emerge as leading voices on several key EFF issues. So, we’ll continue to work—along with partners at other advocacy organizations—to advise lawmakers and to speak up. We’re counting on our supporters and individuals like you to help us champion digital rights. Thanks for your support in 2024.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF’s 2023 Annual Report Highlights a Year of Victories: 2024 in Review

Every fall, EFF releases its annual report, and 2023 was the year of Privacy First. Our annual report dives into our groundbreaking whitepaper along with victories in freeing the law, right to repair, and more. It’s a great, easy-to-read summary of the year’s work, and it contains interesting tidbits about the impact we’ve made—for instance, did you know that, as of 2023, 394,000 people had downloaded an episode of EFF’s podcast, “How to Fix the Internet”? Or that EFF had donors in 88 countries?

As you can see in the report, EFF’s role as the oldest, largest, and most trusted digital rights organization became even more important when tech law and policy commanded the public’s attention in 2023. Major headlines pondered the future of internet freedom. Arguments around free speech, digital privacy, AI, and social media dominated Congress, state legislatures, the U.S. Supreme Court, and the European Union.

EFF intervened with logic and leadership to keep bad ideas from getting traction, and we articulated solutions to legitimate concerns with care and nuance in our whitepaper, Privacy First: A Better Way to Protect Against Online Harms. It demonstrated how seemingly disparate concerns are in fact linked to the dominance of tech giants and the surveillance business models used by most of them. We noted how these business models also feed law enforcement’s increasing hunger for our data. We pushed for a comprehensive approach to privacy instead and showed how this would protect us all more effectively than harmful censorship strategies.  

The longest-running fight we won in 2023 was to free the law: In our legal representation of PublicResource.org, we successfully ensured that copyright law does not block you from finding, reading, and sharing laws, regulations, and building codes online. We also won a major victory in helping to pass a law in California to increase tech users’ ability to control their information. In states across the nation, we helped boost the right to repair. Due to the efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years, as much as 95% of the web is now encrypted. And that’s just barely scratching the surface.

Read the Report

Obviously, we couldn’t do any of this without the support of our members, large and small. Thank you. Take a look at the report for more information about the work we’ve been able to do this year thanks to your help.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Aerial and Drone Surveillance: 2024 in Review

We've been fighting against aerial surveillance for decades because we recognize the immense threat from Big Brother in the sky. Even within the confines of your own backyard, you are exposed to eyes from above.

Aerial surveillance was first conducted with manned aircraft, which the Supreme Court held was permissible without a warrant in a couple of cases in the 1980s. But, as we’ve argued to courts, drones have changed the equation. Drones were developed as a military technology before being adopted by domestic law enforcement. And in the past decade, commercial drone makers began marketing to civilians, making drones ubiquitous in our lives and exposing us to being watched from above by the government and our neighbors. But we believe that when we're in the constitutionally protected areas of our backyards or homes, we have the right to privacy, no matter how technology has advanced.

This year, we focused on fighting back against aerial surveillance facilitated by advancements in these technologies. Unfortunately, many of the legal challenges to aerial and drone surveillance are hindered by those Supreme Court cases. But we argued that cases decided when people were playing Space Invaders on the Atari 2600 and watching The Goonies on VHS should not control the legality of conduct in the age of Animal Crossing and 4K streaming services. As nostalgic as those memories may be, laws from those times are just as outdated as 16K RAM packs and magnetic videotapes. And we have applauded courts for recognizing that.

Unfortunately, the Supreme Court has failed to update its understanding of aerial surveillance, even though other courts have found certain types of aerial surveillance to violate the federal and state constitutions.  

Because of this ambiguity, law enforcement agencies across the nation have been quick to adopt various drone systems, especially those marketed as “drone as first responder” programs, which ostensibly allow police to assess a situation—whether it’s dangerous or requires a police response at all—before officers arrive at the scene. Data from the Chula Vista Police Department in Southern California, which pioneered the model, shows that drones frequently respond to domestic violence, unspecified disturbances, and requests for psychological evaluations. Likewise, flight logs indicate the drones are often used to investigate crimes related to homelessness. The Brookhaven Police Department in Georgia has also adopted this model. While these programs sound promising in theory, municipalities have been reluctant to share the data, despite courts ruling that the information is not categorically closed to the public.

Additionally, while law enforcement agencies are quick to assure the public that their policies respect privacy concerns, those can be hollow assurances. The NYPD promised that it would not surveil constitutionally protected backyards with drones, but Mayor Eric Adams decided to use them to spy on backyard parties over Labor Day in 2023 anyway. Without strict regulations in place, our privacy interests are at the whims of whoever holds power over these agencies.

Alarmingly, there are increasing calls by police departments and drone manufacturers to arm remote-controlled drones. After widespread backlash, including resignations from its ethics board, drone manufacturer Axon said in 2022 that it would pause a program to develop a taser-armed drone meant for school shooting scenarios. We’re likely to see more proposals like this, including drones armed with pepper spray and other crowd-control weapons.

As drones carry ever more sophisticated payloads and become cheaper, aerial surveillance has become a favorite tool of law enforcement and other government agencies. We must ensure that these technological developments do not encroach on our constitutional rights to privacy.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Restrictions on Free Expression and Access to Information in Times of Change: 2024 in Review

This was a historic year. Elections took place in countries home to almost half the world’s population; it was a year of war, and of the collapse of, or chaos within, several governments. It was also a year of new technologies, policy changes, and legislative developments. Amidst these sweeping changes, freedom of expression has never been more important, and around the world, 2024 saw numerous challenges to it. From new legal restrictions on speech to wholesale internet shutdowns, here are just a few of the threats to freedom of expression online that we witnessed in 2024.

Internet shutdowns

It is sadly not surprising that, in a year in which national elections took place in at least 64 countries, internet shutdowns would be commonplace. Access Now, which tracks shutdowns and runs the KeepItOn Coalition (of which EFF is a member), found that seven countries—Comoros, Azerbaijan, Pakistan, India, Mauritania, Venezuela, and Mozambique—at least partially restricted access to the internet during election periods. These restrictions not only prevent people from sharing news of what’s happening on the ground, but also impede access to basic services, commerce, and communications.

Repression of speech in times of conflict

But elections aren’t the only justification governments use for restricting internet access. In times of conflict or protest, access to internet infrastructure is key for enabling essential communication and reporting. Governments know this, and over the past decades, have weaponized access as a means of controlling the free flow of information. This year, we saw Sudan enact a total communications blackout amidst conflict and displacement. The Iranian government has over the past two years repeatedly restricted access to the internet and social media during protests. And Palestinians in Gaza have been subject to repeated internet blackouts inflicted by Israeli authorities.

Social media platforms have also played a role in restricting speech this year, particularly when it comes to Palestine. We documented unjust content moderation by companies at the request of Israel’s Cyber Unit, submitted comments to Meta’s Oversight Board on the use of the slogan “from the river to the sea” (a position the Oversight Board notably agreed with), and submitted comments to the UN Special Rapporteur on Freedom of Expression and Opinion expressing concern about the disproportionate impact of platform restrictions on expression by governments and companies.

In our efforts to ensure free expression is protected online, we collaborated with numerous groups and coalitions in 2024, including our own global content moderation coalition, the Middle East Alliance for Digital Rights, the DSA Human Rights Alliance, EDRI, and many others.

Restrictions on content, age, and identity

Another alarming 2024 trend was the growing push from several countries to restrict access to the internet by age, often by requiring ID to get online, thus inhibiting people’s ability to identify as they wish. In Canada, an overbroad age verification bill, S-210, seeks to prevent young people from encountering sexually explicit material online, but would require all users to submit identification before going online. The UK’s Online Safety Act, which EFF has opposed since its first introduction, would also mandate age verification and would penalize websites and apps that host otherwise-legal content deemed “harmful” to minors by regulators. Similarly, in the United States, the Kids Online Safety Act (still under revision) would require companies to moderate “lawful but awful” content and subject users to privacy-invasive age verification. And in recent weeks, Australia has enacted a vague law that aims to block teens and children from accessing social media, marking a step back for free expression and privacy.

While these governments ostensibly aim to protect children from harm, as we have repeatedly demonstrated, their efforts can also hurt young people by preventing them from accessing information that is not taught in schools or otherwise available in their communities.

One group that is particularly impacted by these and other regulations enacted by governments around the world is the LGBTQ+ community. In June, we noted that censorship of online LGBTQ+ speech is on the rise in a number of countries. We continue to keep a close watch on governments that seek to restrict access to vital information and communications.

Cybercrime

We’ve been pushing back against cybercrime laws for a long time. In 2024, much of that work focused on the UN Cybercrime Convention, a treaty that would allow states to collect evidence across borders in cybercrime cases. While that might sound acceptable to many readers, the problem is that numerous countries utilize “cybercrime” as a means of punishing speech. One such country is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

EFF has fought back against Jordan’s cybercrime law, as well as bad cybercrime laws in China, Russia, the Philippines, and elsewhere, and we will continue to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Cars (and Drivers): 2024 in Review

If you’ve purchased a car made in the last decade or so, it’s likely jam-packed with enough technology to make your brand new phone jealous. Modern cars have sensors, cameras, GPS for location tracking, and more, all collecting data—and it turns out in many cases, sharing it.

Cars Sure Are Sharing a Lot of Information

While we’ve been keeping an eye on the evolving state of car privacy for years, everything really took off after a New York Times report this past March found that the car maker G.M. was sharing information about driver’s habits with insurance companies without consent.

It turned out a number of other car companies were doing the same, using deceptive design so people didn’t always realize they were opting into these programs. We walked through how to see for yourself what data your car collects and shares. That said, cars, infotainment systems, and car makers’ apps are so unstandardized that it’s often very difficult for drivers to research, let alone opt out of, data sharing.

Which is why we were happy to see Senators Ron Wyden and Edward Markey send a letter to the Federal Trade Commission urging it to investigate these practices. The fact is: car makers should not sell our driving and location history to data brokers or insurance companies, and they shouldn’t make it as hard as they do to figure out what data gets shared and with whom.

Advocating for Better Bills to Protect Abuse Survivors

The amount of data modern cars collect is a serious privacy concern for all of us. But for people in an abusive relationship, tracking can be a nightmare.

This year, California considered three bills intended to help domestic abuse survivors endangered by vehicle tracking. Of those, we initially liked the approach behind two of them, S.B. 1394 and S.B. 1000. When introduced, both would have served the needs of survivors in a wide range of scenarios without inadvertently creating new avenues of stalking and harassment for the abuser to exploit. They both required car manufacturers to respond to a survivor's request to cut an abuser's remote access to a car's connected services within two business days. To make a request, a survivor had to prove the vehicle was theirs to use, even if their name was not on the loan or title.

But the third bill, A.B. 3139, took a different approach. Rather than have people submit requests first and cut access later, this bill required car manufacturers to terminate access immediately, requiring only some follow-up documentation up to seven days later. Likewise, S.B. 1394 and S.B. 1000 were amended to adopt this "act first, ask questions later" framework. This approach helps survivors in one scenario—a survivor who has no documentation of their abuse and who needs to get away immediately in a car owned by their abuser. Unfortunately, it also opens up many new avenues of stalking, harassment, and abuse. These bills were ultimately combined into S.B. 1394, which retained some provisions we remain concerned about.

It’s Not Just the Car Itself

Given everything else that comes with car ownership, the car itself is just one piece of the mobile privacy puzzle.

This year we fought against A.B. 3138 in California, which proposed adding GPS technology to digital license plates to make them easier to track. The bill passed, unfortunately, but location data privacy continues to be an important issue that we’ll fight for.

We wrote about a bulletin released by the U.S. Cybersecurity and Infrastructure Security Agency about infosec risks in one brand of automated license plate readers (ALPRs). Specifically, the bulletin outlined seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials. The sheer scale of this vulnerability is alarming: EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates.
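To illustrate why bulk collection at this scale is so sensitive, here is a minimal sketch, using synthetic records and hypothetical field names, of how little analysis it takes to turn raw plate scans into someone’s routine:

```python
# Synthetic ALPR log entries: (plate, camera location, hour of day).
# None of these records or field names come from a real system.
from collections import Counter

scans = [
    ("7ABC123", "Elm St & 5th Ave", 8),
    ("7ABC123", "Elm St & 5th Ave", 18),
    ("7ABC123", "Oak Clinic parking lot", 9),
    ("7ABC123", "Elm St & 5th Ave", 8),
]

def pattern_of_life(plate: str, scans: list) -> list:
    """Rank where and when a given plate is seen most often."""
    sightings = Counter((loc, hour) for p, loc, hour in scans if p == plate)
    return sightings.most_common()

# Repeated (location, hour) pairs reveal commutes, appointments, and
# associations -- the "pattern of life" the raw logs make possible.
print(pattern_of_life("7ABC123", scans))
```

A handful of repeated sightings is enough to infer where someone lives, works, and seeks care, which is why retention and access rules for this data matter as much as the cameras themselves.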

Finally, in order to drive a car, you need a license, and increasingly states are offering digital IDs. We dug deep into California’s mobile ID app, wrote about the various issues with mobile IDs—which range from equity to privacy problems—and put together an FAQ to help you decide if you’d even benefit from setting up a mobile ID if your state offers one. Digital IDs are a major concern for us in the coming years, both due to the unanswered questions about their privacy and security, and their potential use for government-mandated age verification on the internet.

The privacy problems of cars are of increasing importance, which is why Congress and the states must pass comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. While we tend to think of data privacy laws as dealing with computers, phones, or IoT devices, they’re just as applicable, and increasingly necessary, for cars, too.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.
