Surveillance Self-Defense: 2024 in Review

December 26, 2024 at 10:39

This year, we celebrated the 15th anniversary of our Surveillance Self-Defense (SSD) guide. How’d we celebrate? We kept at it—continuing to work on, refine, and update one of the longest-running security and privacy guides on the internet.

Technology changes quickly enough as it is, but so does the language we use to describe that technology. In order for SSD to thrive, it needs careful attention throughout the year. So, we like to think of SSD as a garden, always in need of a little watering, maybe some trimming, and the occasional mowing down of dead technologies. 

Brushing Up on the Basics

A large chunk of SSD exists to explain concepts around digital security in the hopes that you can take that knowledge to make your own decisions about your specific needs. As we often say, security is a mindset, not a purchase. But in order to foster that mindset, you need some basic knowledge. This year, we set out to refine some of this guidance in the hopes of making it easier to read and useful for a variety of skill levels. The guides we updated included:

Big Guides, Big (and Small) Changes

If you’re looking for something a bit longer, then some of our more complicated guides are practically novels. This year, we updated a few of these.

We went through our Privacy Breakdown of Mobile Phones and updated it with more recent examples when applicable, and included additional tips at the end of some sections for actionable steps you can take. Phones continue to be one of the most privacy-invasive devices we own, and getting a handle on what they’re capable of is the first step to figuring out what risks you may face.

Our Attending a Protest guide is something we revisit every year (sometimes a couple times a year) to ensure it’s as accurate as possible. This year was no different, and while there were no sweeping changes, we did update the included PDF guide and added screenshots where applicable.

We also slightly reworked our How to: Understand and Circumvent Network Censorship guide to frame it more as instructional guidance, and included new features and tools for getting around censorship, like using a proxy in messaging tools.

New Guides

We saw two additions to the SSD this year. First up was How to: Detect Bluetooth Trackers, our guide to locating unwanted Bluetooth trackers—like Apple AirTags or Tile—that someone may use to track your location. Both Android and iOS have made changes to help detect these sorts of trackers, but the wide array of different products on the market means detection doesn’t always work as expected.

We also put together a guide for the iPhone’s Lockdown Mode. While not a feature that everyone needs to consider, it has proven helpful in some cases, and knowing what those circumstances are is an important step in deciding if it’s a feature you need to enable.  

But How Do I?

As the name suggests, our Tool Guides are all about learning how to best protect what you do on your devices. This might be setting up two-factor authentication, turning on encryption on your laptop, or setting up something like Apple’s Advanced Data Protection. These guides tend to need a yearly look to ensure they’re up to date. For example, Signal launched usernames this year, so we went in and made sure the guide covered them. Here’s what we updated this year:

And Then There Were Blogs

Surveillance Self-Defense isn’t just a website; it’s also a general approach to privacy and security. To that end, we often use our blog to tackle more specific questions or respond to news.

This year, we talked about the risks you might face using your state’s digital driver’s license, and whether or not the promise of future convenience is worth the risks of today.

We dove into a VPN attack method called TunnelVision, which showed how someone on a local network could intercept some VPN traffic. We’ve reiterated our advice here that VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.
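For the technically curious: TunnelVision (tracked as CVE-2024-3661) works by running a rogue DHCP server on the local network that pushes “classless static routes” (DHCP option 121, defined in RFC 3442) that are more specific than the VPN’s default route, so the operating system sends traffic to an attacker-controlled gateway instead of through the tunnel. Below is a minimal, illustrative sketch of how such a route payload is encoded; the gateway address is made up, and this is not the researchers’ proof-of-concept code.

```typescript
// Sketch: encoding DHCP option 121 (RFC 3442 classless static routes).
// The two /1 routes below together cover all of IPv4 and are more specific
// than a VPN's 0.0.0.0/0 default route, so matching traffic bypasses the
// tunnel. Addresses are hypothetical.

interface Route {
  dest: string;      // destination network, e.g. "0.0.0.0"
  prefixLen: number; // subnet mask width in bits
  router: string;    // next hop on the local network
}

function ipToBytes(ip: string): number[] {
  return ip.split(".").map((octet) => parseInt(octet, 10));
}

// RFC 3442: each route is <mask width><significant destination octets><router>.
function encodeOption121(routes: Route[]): Uint8Array {
  const out: number[] = [];
  for (const { dest, prefixLen, router } of routes) {
    out.push(prefixLen);
    out.push(...ipToBytes(dest).slice(0, Math.ceil(prefixLen / 8)));
    out.push(...ipToBytes(router));
  }
  return new Uint8Array(out);
}

const gateway = "192.168.1.37"; // hypothetical attacker-controlled gateway
const payload = encodeOption121([
  { dest: "0.0.0.0", prefixLen: 1, router: gateway },
  { dest: "128.0.0.0", prefixLen: 1, router: gateway },
]);
console.log(
  Array.from(payload, (b) => b.toString(16).padStart(2, "0")).join(" "),
); // 01 00 c0 a8 01 25 01 80 c0 a8 01 25
```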

Location data privacy remained a major issue this year, with horrific potential abuses of this data popping up in the news constantly. We showed how and why you should disable location sharing in apps that don’t need access to function.

As mentioned above, our SSD guide on protesting is a perennial always in need of pruning, but sometimes you need to plant a whole new flower, as was the case when we decided to write up tips for protesters on campuses around the United States.

Every year, we fight for more privacy and security, but until we get that, stronger control over our data and a better understanding of how technology works are our best defense.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review

For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission has handed down many regulations covering all sectors, but Big Tech has been the center of its focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe.

The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content. 

For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks” – those arising from their platforms’ underlying design and rules, as well as from how these services are used by the public. Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns while simultaneously upholding fundamental rights, such as freedom of expression, and considering impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.

As you can see, the DSA is trying to walk a fine line between addressing safety concerns and respecting the priorities of the market. The DSA imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without constraining the platforms’ operations so much that they can’t innovate and thrive.

The DMA, on the other hand, concerns itself entirely with the macro level – not with the rights of users, but with the obligations of, and restrictions on, the largest, most dominant platforms.

The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.  

Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem. 

As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?

Fairness Regulation: Ambition and High-Stakes Clashes 

There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true. 

Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.  

Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze, and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, the General Data Protection Regulation (GDPR). The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European.

In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense was that it was contractually obliged to collect this information: its terms and conditions represented a promise to show users surveillance ads, and if it didn’t gather all that information, it would be breaking that promise.

The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services – Facebook, Instagram, Threads, and its “metaverse” (snicker) – are no longer allowed to plunder your private information. They must get your consent.

In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again. 

Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.  

For all its intransigence, Apple is getting off extremely light. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage – a dominant platform that the company’s own CEO openly boasts about as a source of lock-in – was not a “gatekeeper platform.”

Platform regulation: A delicate balance 

For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t on the firing line?

EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.  

In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance! 

But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship. 

In 2024, our fears were realized when the DSA’s ambiguity as to how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Thierry Breton sent a letter to Twitter, saying that under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK, and about an upcoming meeting between Donald Trump and Elon Musk. This letter sparked widespread concern that the DSA was a tool to allow bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, and then blurred the DSA’s critical line between “illegal” and “harmful.” Breton’s letter also ignored the territorial limits of the DSA, demanding content takedowns that reached outside the EU.

Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommended that the guidelines prioritize best practices instead of policing speech, and that DSA risk assessment and mitigation compliance evaluations focus on ensuring respect for fundamental rights.

The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But despite concerns raised by EFF and other civil society groups, a new EU law, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively forcing platforms to host media content. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Celebrating Digital Freedom with EFF Supporters: 2024 in Review

December 26, 2024 at 10:33

“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.” It can be a tough job. A lot of our time is spent fighting bad things that are happening in the world or fixing things that have been broken for a long time.

But this work is important, and we've accomplished great things this year! Thanks to your help, we pushed the USPTO to withdraw harmful patent review proposals, fought for the public's right to access police drone footage, and continued to see more and more of the web encrypted thanks to Certbot and Let’s Encrypt.

Of course, the biggest reason EFF is able to fight for privacy and free expression online is support from EFF members. Public support is not only the reason we can operate but is also a great motivator to wake up and advocate for what’s right—especially when we get to hang out with some really cool folks! And with that, I’d like to reminisce.

EFF's Bay Area Festivities

Early in the year, we held our annual Spring Members’ Speakeasy. We invited supporters in the Bay Area to join us at Babylon Burning, where all of EFF’s t-shirts, hoodies, and much of our swag are made. There, folks got a fun opportunity to hand-print their own tote bags! It was also a chance to see t-shirts that even I had never seen before. Side note: EFF has a lot of mechas on members’ t-shirts.

Vintage EFF t-shirts hung across the walls at Babylon Burning.

The EFF team had a great time with EFF supporters at events throughout the year. Of course, my mind was blown seeing the questions EFF gamemasters (including the Cybertiger) came up with for both Tech Trivia and Cyberlaw Trivia. What was even more impressive was seeing how many answers teams got right at both events. During Cyberlaw Trivia, one team was able to recite 22 digits of pi, winning the tiebreaker question and the coveted first place prize!

Beating the Heat in Las Vegas

EFF staff with the Uber Contributor Award.

Next, one of my favorite summer pastimes: beating the heat in Las Vegas, where we get to see thousands of EFF supporters for the summer security conferences—BSidesLV, Black Hat, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week. The support EFF receives during the summer security conferences always amazes me, and it’s a joy to say hi to everyone who stops by to see us. We received an award from DEF CON and even speed-ran a legal case, ensuring a security researcher’s ability to give their talk at the conference.

While the lawyers were handling the legal case at DEF CON, a subgroup of us had a blast participating in the EFF Benefit Poker Tournament. Forty-six supporters and friends played for money, glory, and the future of the web—all while using these new EFF playing cards! In the end, only one winner could beat the celebrity guests, including Cory Doctorow and Deviant (even winning the literal shirt off of Deviant's back).

EFFecting Change

This year we also launched a new livestream series: EFFecting Change. With our initial three events, we covered recent Supreme Court cases and how they affect the internet, keeping yourself safe when seeking reproductive care, and how to protest with privacy in mind. We’ve seen a lot of support for these events and are excited to continue them next year. Oh, and no worries if you missed one—they’re all recorded here!

Congrats to Our 2024 EFF Award Winners

We wanted to end the year in style, of course, with our annual EFF Awards. This year we gave awards to 404 Media, Carolina Botero, and Connecting Humanity—and you can watch the keynote if you missed it. We’re grateful to honor and lift up the important work of these award winners.

EFF staff and EFF Award Winners holding their trophies.

And It's All Thanks to You

There was so much more to this year, too. We shared campfire tales from digital freedom legends, the Encryptids; poked fun at bogus copyright law with our latest membership t-shirt; and hosted even more events throughout the country.

As 2025 approaches, it’s important to reflect on all the good work that we’ve done together in the past year. Yes, there’s a lot going on in the world, and times may be challenging, but with support from people like you, EFF is ready to keep up the fight—no matter what.

Many thanks to all of the EFF members who joined forces with us this year! If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Fighting For Progress On Patents: 2024 in Review

By Joe Mullin
December 25, 2024 at 10:34

The rights we have in the offline world–to speak freely, create culture, play games, build new things and do business–must be available to us online, as well. This core belief drives EFF’s work to fight the misuse of the patent system. 

Despite the significant progress we’ve made over the last decade, patents, and in particular vague software patents, remain a serious threat to online rights. The median patent lawsuit isn't filed by what Americans would recognize as an ‘inventor,’ but by an anonymous limited liability company that provides no products or services, and instead uses patents to threaten others over alleged infringement. In other words, a patent troll. In the tech sector, more than 85% of patent lawsuits are filed by these “non-practicing entities.”

That’s why at EFF, we continue to help individuals and organizations fight patent threats related to everyday activities like using CAPTCHAs and picture menus, tracking packages or vehicles, teaching languages, holding online contests, or playing simple games online.

Here’s where the fight stands as we move into 2025. 

Defending the Public’s Right To Challenge Bad Patents

In 2012, recognizing the persistent problem of an overburdened patent office issuing countless dubious patents each year, Congress established a system called “inter partes reviews” (IPRs) to review and challenge patents. While far from perfect, IPRs have led to the cancellation of thousands of patents that should never have been granted in the first place.

It’s no surprise that big patent owners and patent trolls have long sought to dismantle the IPR system. After unsuccessful attempts to persuade federal courts to dismantle IPRs, they shifted tactics in the past 18 months, attempting to convince the U.S. Patent and Trademark Office (USPTO) to undermine the IPR system by changing the rules on who can use it. 

EFF opposed these proposed changes, urging our supporters to file public comments. This effort was a resounding success. After reviewing thousands of comments, including nearly 1,000 inspired by EFF’s call to action, the USPTO withdrew its proposal.

Stopping Congress From Re-Opening The Door To The Worst Patents 

The patent system, particularly in the realm of software, is broken. For more than 20 years, the U.S. Patent Office has issued patents on basic cultural or business practices, often with little more than the addition of computer jargon or trivial technical elements. 

The Supreme Court addressed this issue a decade ago with its landmark decision in a case called Alice v. CLS Bank, ruling that simply adding computer language to these otherwise generic patents isn’t enough to make them valid. However, Alice hasn’t fully protected us from patent trolls. Even with this decision, the cost of challenging a patent can run into hundreds of thousands of dollars, enabling patent trolls to make “nuisance” demands for amounts of $100,000 or less. But Alice has dampened the severity and frequency of patent troll claims, and allowed many more businesses to fight back when needed.

So we weren’t surprised when some large patent owners tried again this year to overturn Alice, with the introduction of the Patent Eligibility Restoration Act (PERA), which would bring the worst patents back into the system. PERA would also have overturned the Supreme Court ruling that prevents the patenting of human genes. EFF opposed PERA at every stage, and late this year, its supporters abandoned their efforts to pass it through the 118th Congress. We know they will try again next year, and we’ll be ready.

Shining Light On Secrecy In Patent Litigation

Litigation in the U.S. is supposed to be transparent, particularly in patent cases involving technologies that impact millions of internet users daily. Unfortunately, this is not always the case. In Entropic Communications LLC v. Charter Communications, filed in the U.S. District Court for the Eastern District of Texas, overbroad sealing of documents has obscured the case from public view. EFF intervened in the case to protect the public’s right to access federal court records, as the claims made by Entropic could have wide-reaching implications for anyone using cable modems to connect to the internet.

Our work to ensure transparency in patent disputes is ongoing. In 2016, EFF intervened in another overly-sealed patent case in the Eastern District of Texas. In 2022, we did the same in California, securing an important transparency ruling. That same year, we supported a judge’s investigation into patent owners in Delaware, which ultimately resulted in referrals for criminal investigation. The judge’s actions were upheld on appeal this year. 

It remains far too easy for patent trolls to extort and exploit individuals and companies simply for creating or using software. In 2025, EFF will continue fighting for a patent system that’s open, fair, and transparent. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

We Stood Up for Access to the Law and Congress Listened: 2024 in Review

December 25, 2024 at 10:34

Ever since they lost in court, a number of industry giants have pushed a bill that purported to be about increasing access to the law. In fact, it would give them enormous power over the public’s ability to access, share, teach, and comment on the law.

This sounds crazy—no one should be able to own the law. But these industry associations claim there’s a glaring exception to the rule: safety and building codes. The key distinction, they insist, is how these particular laws are developed. Often, when it comes to creating the best practices for an industry, a group of experts comes together to draft model standards. Many of those standards are then “incorporated by reference” into law, making them legal mandates just as surely as the U.S. tax code.

But unlike most U.S. laws, the industry associations that convene the experts claim that they own a copyright in the results, which means they get to control, and charge for, access to them.

The consequences aren’t hard to imagine. If you are a journalist trying to figure out whether a bridge that collapsed violated legal safety standards, you have to get the standards from the industry association, and pay for them. If you are a renter who wants to know whether your apartment complies with the fire code, you face the same barrier. And so on.

Many organizations are working to remedy the situation, making standards available online for free (or, in some cases, for free but with a “premium” version that offers additional services on top). Courts around the country have affirmed their right to do so. 

Which brings us to the “Protecting and Enhancing Public Access to Codes Act,” or “Pro Codes.” The Act would require industry associations to make standards incorporated by reference into law available to the public for free. But here’s the kicker: in exchange, Congress would affirm that the associations hold a legitimate copyright in those laws.

This is a bad deal for the public. First, access would mean read-only, and subject to licensing limits. We already know what that looks like: currently, the associations that make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you can look at the law (as long as you aren’t print-disabled and you know exactly what to look for), but you can’t share it, compare it, or comment on it. That’s fundamentally against the public interest, as many have said. It gives private parties a windfall to do badly what others, like EFF client Public Resources Online, already do better and for free.

Second, it’s solving a nonexistent problem. The many volunteers who develop these codes neither need nor want a copyright incentive. The industry associations don’t need it either—they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into law.

Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the law.   

We’re pleased that members of Congress have recognized the many problems with this bill. Many of you wrote to your members to raise concerns, and when it was brought to a vote in committee, members registered those concerns. While it passed out of the House Judiciary Committee, the House of Representatives was asked to vote on the bill “on suspension,” meaning it could avoid debate and become law if two-thirds of the House voted yes on it. Suspension is, in theory, meant to make it easier to pass uncontroversial bills.

Because you wrote in, because experts sent letters explaining the problems, enough members of Congress recognized that Pro Codes is not uncontroversial. It is not a small deal to allow industry giants to own parts of the law.  

This year, we are glad that so many people lent their time and energy to understanding the wolf in sheep's clothing that the Pro Codes Act really was. And we hope that standards development organizations (SDOs) take note that they cannot pull the wool over everyone's eyes. Not while we're keeping watch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Police Surveillance in San Francisco: 2024 in Review

December 25, 2024 at 10:33

From a historic ban on police use of face recognition, to landmark CCOPS (Community Control Over Police Surveillance) legislation, to the first ban in the United States on police deploying deadly force via robot, for several years San Francisco has been leading the way on necessary reforms over how police use technology.

Unfortunately, 2024 was a far cry from those victories.

While EFF continues to fight for common-sense police reforms in our own backyard, this year saw a change in city politics toward something darker and more unaccountable than we’ve seen in a while.

In the spring of this year, we opposed Proposition E, a ballot measure that allows the San Francisco Police Department (SFPD) to effectively experiment with any piece of surveillance technology for a full year without any approval or oversight. This gutted the 2019 Surveillance Technology Ordinance, which required city departments like the SFPD to obtain approval from the city’s elected governing body before acquiring or using specific surveillance technologies. We understood how dangerous Prop E was to democratic control and transparency, and even went so far as to fly a plane over San Francisco asking voters to reject the measure. Unfortunately, despite a strong opposition campaign, Prop E passed in the March 5, 2024 election.

Soon thereafter, we were reminded of the importance of passing democratic control and transparency laws at all levels of government, not just local. AB 481 is a California law requiring law enforcement agencies to get approval from their local elected governing body before purchasing military equipment, including drones. In the haste to purchase drones after Prop E passed, the SFPD knowingly violated this state law in order to begin purchasing more surveillance equipment. AB 481 has no real enforcement mechanism, which means concerned residents are left waving their arms around and imploring the police to follow the law. But we complained loudly enough that the California Attorney General’s office issued a bulletin reminding law enforcement agencies of their obligations under AB 481.

EFF is an organization proudly based in San Francisco. Our fight to make it a place where technology aids, rather than hinders, safety and equity for all people will continue, even if that means calling attention to the SFPD’s casual lawbreaking or helping to defend the privacy laws that made this city a shining example of 21st-century governance.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The Atlas of Surveillance Expands Its Data on Police Surveillance Technology: 2024 in Review

December 24, 2024 at 14:05

EFF’s Atlas of Surveillance is one of the most useful resources for those who want to understand the use of police surveillance by local law enforcement agencies across the United States. This year, as the police surveillance industry has shifted, expanded, and doubled down on its efforts to win new cop customers, our team has been busily adding new spyware and equipment to this database. We also saw many great uses of the Atlas from journalists, students, and researchers, as well as a growing number of contributors. The Atlas of Surveillance currently captures more than 11,700 deployments of surveillance tech and remains the most comprehensive database of its kind. To learn more about each of the technologies, please check out our Street-Level Surveillance Hub, an updated and expanded version of which was released at the beginning of 2024.

Removing Amazon Ring

We started off with a big change: the removal of our set of Amazon Ring relationships with local police. In January, Amazon announced that it would no longer facilitate warrantless requests for doorbell camera footage through the company’s Neighbors app — a move EFF and other organizations had been demanding for years. Though police can still get access to Ring camera footage by getting a warrant or through other legal means, we decided that tracking Ring relationships in the Atlas no longer served its purpose, so we removed that set of information. People should keep in mind that law enforcement can still connect to individual Ring cameras directly through access facilitated by Fusus and other platforms.

Adding third-party platforms

In 2024, we added an important and growing category of police technology: the third-party investigative platform (TPIP). This is a designation we created for the growing group of software platforms that pull data from other sources and share it with law enforcement, facilitating analysis of police and other data via artificial intelligence and other tools. Common examples include LexisNexis Accurint and Thomson Reuters Clear, among others.

New Fusus data

404 Media released a report last January on the use of Fusus, an Axon system that facilitates access to live camera footage for police and helps funnel such feeds into real-time crime centers. Their investigation revealed that more than 200,000 cameras across the country are part of the Fusus system, and we were able to add dozens of new entries into the Atlas.

New and updated ALPR data 

EFF has been investigating the use of automated license plate readers (ALPRs) across California for years, and we’ve filed hundreds of California Public Records Act requests with departments around the state as part of our Data Driven project. This year, we were able to update all of our entries in California related to ALPR data. 

In addition, we were able to add more than 300 new law enforcement agencies nationwide using Flock Safety ALPRs, thanks to a data journalism scraping project from the Raleigh News & Observer. 

Redoing drone data

This year, we reviewed and cleaned up a lot of the data we had on the police use of drones (also known as unmanned aerial vehicles, or UAVs). A chunk of our data on drones was based on research done by the Center for the Study of the Drone at Bard College, which became inactive in 2020, so we reviewed and updated any entries that depended on that resource. 

We also added new drone data from Illinois, Minnesota, and Texas.

We’ve been watching Drone as First Responder programs since their inception in Chula Vista, CA, and this year we saw vendors like Axon, Skydio, and Brinc make a big push for more police departments to adopt these programs. We updated the Atlas to contain cities where we know such programs have been deployed. 

Other cool uses of the Atlas

The Atlas of Surveillance is designed for use by journalists, academics, activists, and policymakers, and this was another year where people made great use of the data. 

The Atlas of Surveillance is regularly featured in news outlets throughout the country, including MIT Technology Review’s reporting on drones and the Auburn Reporter’s coverage of ALPR use in Washington. It has also been the focus of podcasts and is featured in the book “Resisting Data Colonialism – A Practical Intervention.”

Educators and students around the world cited the Atlas of Surveillance as an important source in their research. One of our favorite projects was from a senior at Northwestern University, who used the data to make a cool visualization of the surveillance technologies in use. At a January 2024 conference at the IT University of Copenhagen, Bjarke Friborg of the project Critical Understanding of Predictive Policing (CUPP) featured the Atlas of Surveillance in his presentation, “Engaging Civil Society.” The Atlas was also cited in multiple academic papers, including in the Annual Review of Criminology and in a forthcoming paper from Professor Andrew Guthrie Ferguson at American University Washington College of Law titled “Video Analytics and Fourth Amendment Vision.”


Thanks to our volunteers

The Atlas of Surveillance would not be possible without our partners at the University of Nevada, Reno’s Reynolds School of Journalism, where hundreds of students each semester collect data that we add to the Atlas. This year we also worked with students at California State University Channel Islands and Harvard University.

The Atlas of Surveillance will continue to track the growth of surveillance technologies. We’re looking forward to working with even more people who want to bring transparency and community oversight to police use of technology. If you’re interested in joining us, get in touch.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

The U.S. Supreme Court Continues its Foray into Free Speech and Tech: 2024 in Review

As we said last year, the U.S. Supreme Court has taken an unusually active interest in internet free speech issues over the past couple of years.

All five pending cases at the end of last year, covering three issues, were decided this year, with varying degrees of First Amendment guidance for internet users and online platforms. We posted some takeaways from these recent cases.

We additionally filed an amicus brief in a new case before the Supreme Court challenging the Texas age verification law.

Public Officials Censoring Comments on Government Social Media Pages

Cases: O’Connor-Ratcliff v. Garnier and Lindke v. Freed – DECIDED

The Supreme Court considered a pair of cases related to whether government officials who use social media may block individuals or delete their comments because the government disagrees with their views. The threshold question in these cases was what test must be used to determine whether a government official’s social media page is largely private and therefore not subject to First Amendment limitations, or is largely used for governmental purposes and thus subject to the prohibition on viewpoint discrimination and potentially other speech restrictions.

The Supreme Court crafted a two-part fact-intensive test to determine if a government official’s speech on social media counts as “state action” under the First Amendment. The test includes two required elements: 1) the official “possessed actual authority to speak” on the government’s behalf, and 2) the official “purported to exercise that authority when he spoke on social media.” As we explained, the court’s opinion isn’t as generous to internet users as we asked for in our amicus brief, but it does provide guidance to individuals seeking to vindicate their free speech rights against government officials who delete their comments or block them outright.

Following the Supreme Court’s decision, the Lindke case was remanded back to the Sixth Circuit. We filed an amicus brief in the Sixth Circuit to guide the appellate court in applying the new test. The court then issued an opinion in which it remanded the case back to the district court to allow the plaintiff to conduct additional factual development in light of the Supreme Court's new state action test. The Sixth Circuit also importantly held in relation to the first element that “a grant of actual authority to speak on the state’s behalf need not mention social media as the method of speaking,” which we had argued in our amicus brief.

Government Mandates for Platforms to Carry Certain Online Speech

Cases: NetChoice v. Paxton and Moody v. NetChoice – DECIDED  

The Supreme Court considered whether laws in Florida and Texas violated the First Amendment because they allow those states to dictate when social media sites may not apply standard editorial practices to user posts. As we argued in our amicus brief urging the court to strike down both laws, allowing social media sites to be free from government interference in their content moderation ultimately benefits internet users. When platforms have First Amendment rights to curate the user-generated content they publish, they can create distinct forums that accommodate diverse viewpoints, interests, and beliefs.

In a win for free speech, the Supreme Court held that social media platforms have a First Amendment right to curate the third-party speech they select for and recommend to their users, and the government’s ability to dictate those processes is extremely limited. However, the court declined to strike down either law—instead it sent both cases back to the lower courts to determine whether each law could be wholly invalidated rather than challenged only with respect to specific applications of each law to specific functions. The court also made it clear that laws that do not target the editorial process, such as competition laws, would not be subject to the same rigorous First Amendment standards, a position EFF has consistently urged.

Government Coercion in Social Media Content Moderation

Case: Murthy v. Missouri – DECIDED

The Supreme Court considered the limits on government involvement in social media platforms’ enforcement of their policies. The First Amendment prohibits the government from directly or indirectly forcing a publisher to censor another’s speech (often called “jawboning”). But the court had not previously applied this principle to government communications with social media sites about user posts. In our amicus brief, we urged the court to recognize that there are both circumstances where government involvement in platforms’ policy enforcement decisions is permissible and those where it is impermissible.

Unfortunately, the Supreme Court did not answer the important First Amendment question before it—how does one distinguish permissible from impermissible government communications with social media platforms about the speech they publish? Rather, it dismissed the cases on “standing” grounds, because none of the plaintiffs had presented sufficient facts to show that the government did in the past or would in the future coerce a social media platform to take down, deamplify, or otherwise obscure any of the plaintiffs’ specific social media posts. Thus, while the Supreme Court did not tell us more about coercion, it did remind us that it is very hard to win lawsuits alleging coercion.

However, we do know a little more about the line between permissible government persuasion and impermissible coercion from a different jawboning case, outside the social media context, that the Supreme Court also decided this year: NRA v. Vullo. In that case, the National Rifle Association alleged that the New York state agency that oversees the insurance industry threatened insurance companies with enforcement actions if they continued to offer coverage to the NRA. The Supreme Court endorsed a multi-factored test that many of the lower courts had adopted to answer the ultimate question in jawboning cases: did the plaintiff “plausibly allege conduct that, viewed in context, could be reasonably understood to convey a threat of adverse government action in order to punish or suppress the plaintiff’s speech?” Those factors are: 1) word choice and tone, 2) the existence of regulatory authority (that is, the ability of the government speaker to actually carry out the threat), 3) whether the speech was perceived as a threat, and 4) whether the speech refers to adverse consequences.

Some Takeaways From These Three Sets of Cases

The O’Connor-Ratcliff and Lindke cases about social media blocking looked at the government’s role as a social media user. The NetChoice cases about content moderation looked at the government’s role as a regulator of social media platforms. And the Murthy case about jawboning looked at the government’s mixed role as a regulator and user.

Three key takeaways emerged from these three sets of cases (across five total cases):

First, internet users have a First Amendment right to speak on social media—whether by posting or commenting—and that right may be infringed when the government seeks to interfere with content moderation, but it will not be infringed by the independent decisions of the platforms themselves.

Second, the Supreme Court recognized that social media platforms routinely moderate users’ speech: they decide which posts each user sees and when and how they see it, they decide to amplify and recommend some posts and obscure others, and they are often guided in this process by their own community standards or similar editorial policies. The court moved beyond the idea that content moderation is largely passive and indifferent.

Third, the cases confirm that traditional First Amendment rules apply to social media. Thus, when government controls the comments section of a social media page, it has the same First Amendment obligations to those who wish to speak in those spaces as it does in offline spaces it controls, such as parks, public auditoriums, or city council meetings. And online platforms that edit and curate user speech according to their editorial standards have the same First Amendment rights as others who express themselves by selecting the speech of others, including art galleries, booksellers, newsstands, parade organizers, and editorial page editors.

Government-Mandated Age Verification

Case: Free Speech Coalition v. Paxton – PENDING

Last but not least, we filed an amicus brief urging the Supreme Court to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age (see our Year in Review post on age verification). Under HB 1181, passed in 2023, any website that Texas decides is composed of one-third or more of “sexual material harmful to minors” must collect age-verifying personal information from all visitors. We argued that the law places undue burdens on adults seeking to access lawful online speech. First, the law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law requires websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier, for example. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. The court’s decision could have major consequences for the freedom of adults to safely and anonymously access protected speech online.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It: 2024 in Review

By Aaron Mackey
December 24, 2024 at 13:51

People’s ability to speak online, share ideas, and advocate for change is enabled by the countless online services that host everyone’s views.

Despite the central role these online services play in our digital lives, lawmakers and courts spent the last year trying to undermine a key U.S. law, Section 230, that enables services to host our speech. EFF was there to fight back on behalf of all internet users.

Section 230 (47 U.S.C. § 230) is not an accident. Congress passed the law in 1996 because it recognized that for users’ speech to flourish online, services that hosted their speech needed to be protected from legal claims based on any particular user’s speech. The law embodies the principle that everyone, including the services themselves, should be responsible for their own speech, but not the speech of others. This critical but limited legal protection reflects a careful balance by Congress, which at the time recognized that promoting more user speech outweighed the harm caused by any individual’s unlawful speech.

EFF helps thwart effort to repeal Section 230

Members of Congress introduced a bill in May this year that would have repealed Section 230 in 18 months, on the theory that the deadline would motivate lawmakers to come up with a different legal framework in the meantime. Yet the lawmakers behind the effort provided no concrete alternatives to Section 230, nor did they identify any specific parts of the law they believed needed to be changed. Instead, the lawmakers were motivated by their and the public’s justifiable dissatisfaction with the largest online services.

As we wrote at the time, repealing Section 230 would be a disaster for internet users and the small, niche online services that make up the diverse forums and communities that host speech about nearly every interest, religious and political persuasion, and topic. Section 230 protects bloggers, anyone who forwards an email, and anyone who reposts or otherwise recirculates the posts of other users. The law also protects moderators who remove or curate other users’ posts.

Moreover, repealing Section 230 would not have hurt the biggest online services, given that they have astronomical amounts of money and resources to handle the deluge of legal claims that would be filed. Instead, repealing Section 230 would have solidified the dominance of the largest online services. That’s why Facebook has long run a campaign urging Congress to weaken Section 230 – a cynical effort to use the law to cement its dominance.

Thankfully, the bill did not advance, in part because internet users wrote to members of Congress objecting to the proposal. We hope lawmakers in 2025 put their energy toward ending Big Tech’s dominance by enacting a meaningful and comprehensive consumer data privacy law, or by passing laws that enable greater interoperability and competition between social media services. Those efforts would go a long way toward curbing Big Tech’s power without harming users’ online speech.

EFF stands up for users’ speech in courts

Congress was not the only government branch that sought to undermine Section 230 in the past year. Two different courts issued rulings this year that will jeopardize people’s ability to read other people’s posts and make use of basic features of online services that benefit all users.

In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit issued a deeply confused opinion, ruling that Section 230 does not apply to the automated system TikTok uses to recommend content to users. The court reasoned that because online services have a First Amendment right to decide how to present their users’ speech, TikTok’s decisions to recommend certain content reflects its own speech and thus Section 230’s protections do not apply.

We filed a friend-of-the-court brief in support of TikTok’s request for the full court to rehear the case, arguing that the decision was wrong on both the First Amendment and Section 230. We also pointed out how the ruling would have far-reaching implications for users’ online speech. The court unfortunately denied TikTok’s rehearing request, and we are waiting to see whether the service will ask the Supreme Court to review the case.

In Neville v. Snap, Inc., a California trial court refused to apply Section 230 in a lawsuit claiming that basic features of the service, such as disappearing messages, “Stories,” and the ability to befriend mutual acquaintances, amounted to defectively designed products. The trial court’s ruling departs from a long line of other court decisions holding that such claims essentially try to plead around Section 230 by casting the features, rather than the illegal content users created with them, as the problem.

We filed a friend-of-the-court brief in support of Snap’s effort to get a California appellate court to overturn the trial court’s decision, arguing that the ruling threatens the ability of all internet users to rely on basic features of a given service. If a platform faces liability for a feature that some might misuse to cause harm, the platform is unlikely to offer that feature to users, despite the fact that the majority of people use it for legal and expressive purposes. Unfortunately, the appellate court denied Snap’s petition in December, meaning the case continues before the trial court.

EFF supports effort to empower users to customize their online experiences

While lawmakers and courts are often focused on Section 230’s protections for online services, relatively little attention has been paid to another provision in the law that protects those who make tools that allow users to customize their experiences online. Yet Congress included this protection precisely because it wanted to encourage the development of software that people can use to filter out certain content they’d rather not see or otherwise change how they interact with others online.

That is precisely the goal of Unfollow Everything 2.0, a tool being developed by Ethan Zuckerman, a professor at the University of Massachusetts Amherst. The browser extension would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.
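To give a flavor of what a tool like this involves technically, here is a deliberately simplified, hypothetical sketch of a content script that automates unfollowing. The selectors and labels are invented for illustration; this is not Unfollow Everything 2.0’s actual code.

```typescript
// Hypothetical content-script sketch: walk the page and trigger the
// platform's own "Unfollow" controls. All selectors/labels are invented.

function pause(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function unfollowEverything(): Promise<void> {
  // Find every "Following" toggle currently rendered on the page.
  const toggles = document.querySelectorAll<HTMLElement>('[aria-label="Following"]');
  for (const toggle of Array.from(toggles)) {
    toggle.click();   // open the follow menu
    await pause(400); // give the menu time to render
    // Click the "Unfollow" menu item, if one appeared.
    const item = Array.from(
      document.querySelectorAll<HTMLElement>('[role="menuitem"]'),
    ).find((el) => el.textContent?.trim() === "Unfollow");
    item?.click();
    await pause(400);
  }
}

unfollowEverything();
```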

Zuckerman filed a lawsuit against Facebook seeking a court ruling that Unfollow Everything 2.0 was immune from legal claims from Facebook under Section 230(c)(2)(B). EFF filed a friend-of-the-court brief in support, arguing that Section 230’s user-empowerment tool immunity is unique and incentivizes the development of beneficial tools for users, including traditional content filtering, tailoring content on social media to a user’s preferences, and blocking unwanted digital trackers to protect a user’s privacy.

The district court hearing the case unfortunately dismissed it, but the ruling did not reach the merits of whether Section 230 protects Unfollow Everything 2.0. The court gave Zuckerman an opportunity to re-file the case, and we will continue to support his efforts to build user-empowering tools.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF in the Press: 2024 in Review

By Josh Richman
December 23, 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring what happens to all the data we’re shedding and the latest efforts to maximize its benefits without compromising personal privacy. EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s NetChoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities. 

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Defending Encryption in the U.S. and Abroad: 2024 in Review

By: Joe Mullin
December 23, 2024 at 11:05

EFF supporters get that strong encryption is tied to one of our most basic rights: the right to have a private conversation. In the digital world, privacy is impossible without strong encryption. 

That’s why we’ve always got an eye out for attacks on encryption. This year, we pushed back—successfully—against anti-encryption laws proposed in the U.S., the U.K. and the E.U. And we had a stark reminder of just how dangerous backdoor access to our communications can be. 

U.S. Bills Pushing Mass File-Scanning Fail To Advance

The U.S. Senate’s EARN IT Bill is a wrongheaded proposal that would push companies away from using encryption and towards scanning our messages and photos. There’s no reason to enact such a proposal, which technical experts agree would turn our phones into bugs in our pockets.

We were disappointed when EARN IT was voted out of committee last year, even though several senators made clear they wanted to see additional changes before they would support the bill. Since then, however, the bill has gone nowhere. That’s because so many people, including more than 100,000 EFF supporters, have voiced their opposition. 

People increasingly understand that encryption is vital to our security and privacy. And when politicians demand that tech companies install dangerous scanning software whether users like it or not, it’s clear to us all that they are attacking encryption, no matter how much obfuscation takes place. 

EFF has long encouraged companies to adopt policies that support encryption, privacy and security by default. When companies do the right thing, EFF supporters will side with them. EFF and other privacy advocates pushed Meta for years to make end-to-end encryption the default option in Messenger. When Meta implemented the change, they were sued by Nevada’s Attorney General. EFF filed a brief in that case arguing that Meta should not be forced to make its systems less secure. 

UK Backs Off Encryption-Breaking Language 

In the U.K., we fought against the wrongheaded Online Safety Act, which included language that would have let the U.K. government strongarm companies away from using encryption. After pressure from EFF supporters and others, the U.K. government gave last-minute assurances that the bill wouldn’t be applied to encrypted messages. The U.K. agency in charge of implementing the Online Safety Act, Ofcom, has now said that the Act will not apply to end-to-end encrypted messages. That’s an important distinction, and we have urged Ofcom to make that even more clear in its written guidance. 

EU Residents Do Not Want “Chat Control” 

Some E.U. politicians have sought to advance a message-scanning bill that was even more extreme than the U.S. anti-encryption bills. We’re glad to say the EU proposal, which has been dubbed “Chat Control” by its opponents, has also been stalled because of strong opposition. 

Even though the European Parliament last year adopted a compromise proposal that would protect our rights to encrypted communications, a few key member states at the EU Council spent much of 2024 pushing forward the old, privacy-smashing version of Chat Control. But they haven’t advanced. In a public hearing earlier this month, 10 EU member states, including Germany and Poland, made clear they would not vote for this proposal. 

Courts in the E.U., like the public at large, increasingly recognize that private online communications are a human right, and that the encryption required to facilitate them cannot be taken away. The European Court of Human Rights recognized this in a milestone judgment earlier this year, Podchasov v. Russia, which specifically held that weakening encryption put at risk the human rights of all internet users. 

A Powerful Reminder on Backdoors

All three of the above proposals are based on a flawed idea: that it’s possible to give some form of special access to peoples’ private data that will never be exploited by a bad actor. But that’s never been true–there is no backdoor that works only for the “good guys.” 

In October, the U.S. public learned about a major breach of telecom systems stemming from Salt Typhoon, a sophisticated Chinese-government backed hacking group. This hack infiltrated the same systems that major ISPs like Verizon, AT&T and Lumen Technologies had set up for U.S. law enforcement and intelligence agencies to get “lawful access” to user data. It’s still unknown how extensive the damage is from this hack, which included people under surveillance by U.S. agencies but went far beyond that. 

If there’s any upside to a terrible breach like Salt Typhoon, it’s that it is waking some officials up to the fact that encryption is vital to both individual and national security. Earlier this month, a top U.S. cybersecurity chief said “encryption is your friend,” a welcome break from the messaging we at EFF have heard from government agencies over the years. Unfortunately, other agencies, including the FBI, continue to push the idea that strong encryption can be coupled with easy access by law enforcement. 

Whatever happens, EFF will continue to stand up for our right to use encryption to have secure and private online communications.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

2024 Year in Review

By: Cindy Cohn
December 23, 2024 at 10:50

It is our end-of-year tradition at EFF to look back at the last 12 months of digital rights. This year, the number and diversity of our reflections attest that 2024 was a big year. 

If there is something uniting all the disparate threads of work EFF has done this year, it is this: that law and policy should be careful, precise, practical, and technologically neutral. We do not care if a cop is using a glass pressed against your door or the most advanced microphone: they need a warrant.  

For example, much of the public discourse this year was taken up by generative AI. The issue seemed to be a Rorschach test for everyone’s anxieties about technology, whether about privacy, replacement of workers, surveillance, or intellectual property. Ultimately, it matters little what the specific technology is: whenever technology is being used against our rights, EFF will oppose that use. It’s a future-proof way of protecting us. If we have privacy protections, labor protections, and protections against government invasions, then it does not matter what technology takes over the public imagination; we will have recourse against its harms. 

But AI was only one of the issues we took on this past year. We’ve worked on ensuring that the EU’s new rules regarding large online platforms respect human rights. We’ve filed countless briefs in support of free expression online and represented plaintiffs in cases where bad actors have sought to silence them, including citizen journalists who were targeted for posting clips of city council meetings online.  

With your help, we have let the United States Congress know that its constituents support protecting the free press and oppose laws that would cut kids off from vital sources of information. We’ve spoken to legislators, reporters, and the public to make sure everyone is informed about the benefits and dangers of new technologies, new proposed laws, and legal precedent.  

Even all of that does not capture everything we did this year. And we did not—indeed, we could not—do it without you. Your support keeps the lights on and ensures we are not speaking just for EFF as an organization but for our thousands of tireless members. Thank you, as always.  

We will update this page with new stories about digital rights in 2024 every day between now and the new year. 

Defending Encryption in the U.S. and Abroad
EFF in the Press
The U.S. Supreme Court Continues its Foray into Free Speech and Tech
The Atlas of Surveillance Expands Its Data on Police Surveillance Technology
EFF Continued to Champion Users’ Online Speech and Fought Efforts to Curtail It
We Stood Up for Access to the Law and Congress Listened
Police Surveillance in San Francisco
Fighting For Progress On Patents
Celebrating Digital Freedom with EFF Supporters
Surveillance Self-Defense
EU Tech Regulation—Good Intentions, Unclear Consequences


EFF Tells Appeals Court To Keep Copyright’s Fair Use Rules Broad And Flexible

By: Joe Mullin
December 21, 2024 at 12:05

It’s critical that copyright be balanced with limitations that support users’ rights, and perhaps no limitation is more important than fair use. Critics, humorists, artists, and activists all must have rights to re-use and re-purpose source material, even when it’s copyrighted. 

Yesterday, EFF weighed in on another case that could shape the future of our fair use rights. In Sedlik v. Von Drachenberg, a Los Angeles tattoo artist created a tattoo based on a well-known photograph of Miles Davis taken by photographer Jeffrey Sedlik. A jury found that Von Drachenberg, the tattoo artist, did not infringe the photographer’s copyright because her version was different from the photo; it didn’t meet the legal threshold of being “substantially similar.” After the trial, the judge in the case considered other arguments brought by Sedlik and upheld the jury’s findings. 

On appeal, Sedlik has made arguments that, if upheld, could narrow fair use rights for everyone. The appeal brief suggests that only secondary users who make “targeted” use of a copyrighted work have strong fair use defenses, relying on an incorrect reading of the Supreme Court’s decision in Andy Warhol Foundation v. Goldsmith. 


Such a reading would upend decades of Supreme Court precedent that makes it clear that “targeted” fair uses don’t get any special treatment as opposed to “untargeted” uses. As made clear in Warhol, the copying done by fair users must simply be “reasonably necessary” to achieve a new purpose. The principle of protecting new artistic expressions and new innovations is what led the Supreme Court to protect video cassette recording as fair use in 1984. It also contributed to the 2021 decision in Oracle v. Google, which held that Google’s copying of computer programming conventions created for desktop computers, in order to make it easier to design for modern smartphones, was a type of fair use. 

Sedlik argues that if a secondary user could have chosen another work, this means they did not “target” the original work, and thus the user should have a lessened fair use case. But that has never been the rule. As the Supreme Court explained, Warhol could have created art about a product other than Campbell’s Soup; but his choice to copy the famous Campbell’s logo was fully justified because it was “well known to the public, designed to be reproduced, and a symbol of an everyday item for mass consumption.” 

Fair users always select among various alternatives, for both aesthetic and practical reasons. A film professor might know of several films that expertly demonstrate a technique, but will inevitably choose just one to show in class. A news program alerting viewers to developing events may have access to many recordings of the event from different sources, but will choose just one, or a few, based on editorial judgments. Software developers must make decisions about which existing software to analyze or to interoperate with in order to build on existing technology. 

The idea of penalizing these non-“targeted” fair uses would lead to absurd results, and we urge the 9th Circuit to reject this argument. 

Finally, Sedlik also argues that the tattoo artist’s social media posts are necessarily “commercial” acts, which would push the tattoo art further away from fair use. Artists’ use of social media to document their processes and work has become ubiquitous, and such an expansive view of commerciality would render the concept meaningless. That’s why multiple appellate courts have already rejected such a view; the 9th Circuit should do so as well. 

In order for innovation and free expression to flourish in the digital age, fair use must remain a flexible rule that allows for diverse purposes and uses. 

Further Reading: 

  • EFF Amicus Brief in Sedlik v. Von Drachenberg 

Ninth Circuit Gets It: Interoperability Isn’t an Automatic First Step to Liability

December 20, 2024 at 14:26

A federal appeals court just gave software developers, and users, an early holiday present, holding that software updates aren’t necessarily “derivative,” for purposes of copyright law, just because they are designed to interoperate with the software they update.

This sounds kind of obscure, so let’s cut through the legalese. Lots of developers build software designed to interoperate with preexisting works. This kind of interoperability is crucial to innovation, particularly in a world where a small number of companies control so many essential tools and platforms. If users want to be able to repair, improve, and secure their devices, they must be able to rely on third parties to help. Trouble is, Big Tech companies want to be able to control (and charge for) every possible use of the devices and software they “sell” you – and they won’t hesitate to use the law to enforce that control. 

Courts shouldn’t assist, but unfortunately a federal district court did just that in the latest iteration of Oracle v. Rimini. Rimini provides support to improve the use and security of Oracle products, so customers don’t have to depend entirely on Oracle itself. Oracle doesn’t want this kind of competition, so it sued Rimini for copyright infringement, arguing that a software update Rimini developed was a “derivative work” because it was intended to interoperate with Oracle's software, even though the update didn’t use any of Oracle’s copyrightable code. Derivative works are typically things like a movie based on a novel, or a translation of that novel. Here, the only “derivative” aspect was that Rimini’s code was designed to interact with Oracle’s code.  
 
Unfortunately, the district court initially sided with Oracle, setting a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is substantially similar to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those innovations.  
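To make that settled rule concrete, here is a deliberately hypothetical Python sketch. The vendor_app module and everything about its API are invented for illustration and drawn from no real case record; the point is simply that an update can interoperate with a vendor’s software purely through its public interface while incorporating none of the vendor’s code.

```python
# Hypothetical illustration of interoperation without incorporation.
# "vendor_app" and its API are invented for this sketch. The update
# calls the vendor's public interface but copies none of the vendor's
# expression, so it is not "substantially similar" to the vendor's
# work in the sense that matters for derivative-work claims.
import vendor_app  # the preexisting program being updated (hypothetical)

def patched_connect(host: str, *, timeout: float = 5.0):
    """Independently written update: adds a timeout the vendor's client lacks."""
    conn = vendor_app.connect(host)  # interoperates via the public API
    conn.set_timeout(timeout)        # new behavior, written from scratch
    return conn
```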

Rimini appealed to the Ninth Circuit, on multiple grounds. EFF, along with a diverse group of stakeholders representing consumers, small businesses, software developers, security researchers, and the independent repair community, filed an amicus brief in support explaining that the district court ruling on interoperability was not just bad policy, but also bad law.  

 The Ninth Circuit agreed: 

In effect, the district court adopted an “interoperability” test for derivative works—if a product can only interoperate with a preexisting copyrighted work, then it must be derivative. But neither the text of the Copyright Act nor our precedent supports this interoperability test for derivative works. 

 The court goes on to give a primer on the legal definition of derivative work, but the key point is this: a work is only derivative if it “substantially incorporates the other work.”

Copyright already reaches far too broadly, giving rightsholders extraordinary power over how we use everything from music to phones to televisions. This holiday season, we’re raising a glass to the judges who sensibly reined that power in. 

Customs & Border Protection Fails Baseline Privacy Requirements for Surveillance Technology

By: Dave Maass
December 20, 2024 at 13:47

U.S. Customs and Border Protection (CBP) has failed to address six out of six main privacy protections for three of its border surveillance programs—surveillance towers, aerostats, and unattended ground sensors—according to a new assessment by the Government Accountability Office (GAO).

In the report, GAO compared the policies for these technologies against six of the key Fair Information Practice Principles that agencies are supposed to use when evaluating systems and processes that may impact privacy, as dictated by both Office of Management and Budget guidance and the Department of Homeland Security's own rules.

[Chart: the three border surveillance technologies and whether their policies address each of the Fair Information Practice Principles]

These include:

  • Data collection. "DHS should collect only PII [Personally Identifiable Information] that is directly relevant and necessary to accomplish the specified purpose(s)."
  • Purpose specification. "DHS should specifically articulate the purpose(s) for which the PII is intended to be used."
  • Information sharing. "Sharing PII outside the department should be for a purpose compatible with the purpose for which the information was collected."
  • Data security. "DHS should protect PII through appropriate security safeguards against risks such as loss, unauthorized access or use, destruction, modification, or unintended or inappropriate disclosure."
  • Data retention. "DHS should only retain PII for as long as is necessary to fulfill the specified purpose(s)."
  • Accountability. "DHS should be accountable for complying with these principles, including by auditing the actual use of PII to demonstrate compliance with these principles and all applicable privacy protection requirements."

These baseline privacy elements for the three border surveillance technologies were not addressed in any "technology policies, standard operating procedures, directives, or other documents that direct a user in how they are to use a Technology," according to GAO's review.

CBP operates hundreds of surveillance towers along both the northern and southern borders, some of which are capable of capturing video more than seven miles away. The agency has six large aerostats (essentially tethered blimps) that use radar along the southern border, with others stationed in the Florida Keys and Puerto Rico. The agency also operates a series of smaller aerostats that stream video in the Rio Grande Valley of Texas, with the newest one installed this fall in southeastern New Mexico. And the report notes deficiencies with CBP's linear ground detection system, a network of seismic sensors and cameras that are triggered by movement or footsteps.

The GAO report underlines EFF's concerns that the privacy of people who live and work in the borderlands is violated when federal agencies deploy militarized, high-tech programs to confront unauthorized border crossings. The rights of border communities are too often treated as acceptable collateral damage in pursuit of border security.

CBP defended its practices by saying that it does, to some extent, address the Fair Information Practice Principles in its Privacy Impact Assessments, documents written for public consumption. GAO rejected this claim, saying that these assessments do not adequately instruct agency staff on how to protect privacy when deploying the technologies and using the data that has been collected.

In its recommendations, the GAO calls on the CBP Commissioner to "require each detection, observation, and monitoring technology policy to address the privacy protections in the Fair Information Practice Principles." But EFF calls on Congress to hold CBP to account and stop approving massive spending on border security technologies that the agency continues to operate irresponsibly.

The Breachies 2024: The Worst, Weirdest, Most Impactful Data Breaches of the Year

Every year, countless emails hit our inboxes telling us that our personal information was accessed, shared, or stolen in a data breach. In many cases, there is little we can do. Most of us can assume that at least our phone numbers, emails, addresses, credit card numbers, and social security numbers are all available somewhere on the internet.

But some of these data breaches are more noteworthy than others, because they include novel information about us, are the result of particularly noteworthy security flaws, or are just so massive they’re impossible to ignore. For that reason, we are introducing the Breachies, a series of tongue-in-cheek “awards” for some of the most egregious data breaches of the year.

If these companies practiced a privacy first approach and focused on data minimization, only collecting and storing what they absolutely need to provide the services they promise, many data breaches would be far less harmful to the victims. But instead, companies gobble up as much as they can, store it for as long as possible, and inevitably at some point someone decides to poke in and steal that data.

Once all that personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks isn’t just a minor annoyance: research shows it can cause psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

This year we’ve got some real stinkers, ranging from private health information to—you guessed it—credit cards and social security numbers.

The Winners

The Just Stop Using Tracking Tech Award: Kaiser Permanente

In one of the year's most preventable breaches, the healthcare company Kaiser Permanente exposed 13 million patients’ information via tracking code embedded in its website and app. This tracking code transmitted potentially sensitive medical information to Google, Microsoft, and X (formerly known as Twitter). The exposed information included patients’ names, terms they searched in Kaiser’s Health Encyclopedia, and how they navigated within and interacted with Kaiser’s website or app.

The most troubling aspect of this breach is that medical information was exposed not by a sophisticated hack, but through widely used tracking technologies that Kaiser voluntarily placed on its website. Kaiser has since removed the problematic code, but tracking technologies are rampant across the internet and on other healthcare websites. A 2024 study found tracking technologies sharing information with third parties on 96% of hospital websites. Websites usually use tracking technologies to serve targeted ads. But these same technologies give advertisers, data brokers, and law enforcement easy access to details about your online activity.

While individuals can protect themselves from online tracking by using tools like EFF’s Privacy Badger, we need legislative action to make online privacy the norm for everyone. EFF advocates for a ban on online behavioral advertising to address the primary incentive for companies to use invasive tracking technology. Otherwise, we’ll continue to see companies voluntarily sharing your personal data, then apologizing when thieves inevitably exploit a vulnerability in these tracking systems.

Head back to the table of contents.

The Most Impactful Data Breach for 90s Kids Award: Hot Topic

If you were in middle or high school any time in the 90s you probably have strong memories of Hot Topic. Baby goths and young punk rockers alike would go to the mall, get an Orange Julius and greasy slice of Sbarro pizza, then walk over to Hot Topic to pick up edgy t-shirts and overpriced bondage pants (all the while debating who was the biggest poser and which bands were sellouts, of course). Because of the fundamental position Hot Topic occupies in our generation’s personal mythology, this data breach hits extra hard.

In November 2024, Have I Been Pwned reported that Hot Topic and its subsidiary Box Lunch suffered a data breach of nearly 57 million data records. A hacker using the alias “Satanic” claimed responsibility and posted a 730 GB database on a hacker forum with a sale price of $20,000. The compromised data about approximately 54 million customers reportedly includes: names, email addresses, physical addresses, phone numbers, purchase history, birth dates, and partial credit card details. Research by Hudson Rock indicates that the data was compromised using info stealer malware installed on a Hot Topic employee’s work computer. “Satanic” claims that the original infection stems from the Snowflake data breach (another Breachie winner); though that hasn’t been confirmed because Hot Topic has still not notified customers, nor responded to our request for comment.

Though data breaches of this scale are common, it still breaks our little goth hearts, and we’d prefer stores did a better job of securing our data. Worse, Hot Topic still hasn’t publicly acknowledged this breach, despite numerous news reports. Perhaps Hot Topic was the real sellout all along. 

Head back to the table of contents.

The Only Stalkers Allowed Award: mSpy

mSpy, a commercially available mobile stalkerware app owned by the Ukraine-based company Brainstack, was subject to a data breach earlier this year. More than a decade’s worth of information about the app’s customers was stolen, as well as the real names and email addresses of Brainstack employees.

The defining feature of stalkerware apps is their ability to operate covertly and trick users into believing that they are not being monitored. But in reality, applications like mSpy allow whoever planted the stalkerware to remotely view the contents of the victim’s device in real time. These tools are often used to intimidate, harass, and harm victims, including by stalkers and abusive (ex) partners. Given the highly sensitive data collected by companies like mSpy and the harm to targets when their data gets revealed, this data breach is another example of why stalkerware must be stopped.

Head back to the table of contents.

The I Didn’t Even Know You Had My Information Award: Evolve Bank

Okay, are we the only ones who hadn’t heard of Evolve Bank? It was reported in May that Evolve Bank experienced a data breach—though it actually happened all the way back in February. You may be thinking, “why does this breach matter if I’ve never heard of Evolve Bank before?” That’s what we thought too!

But here’s the thing: this attack affected a bunch of companies you have heard of, like Affirm (the buy now, pay later service), Wise (the international money transfer service), and Mercury Bank (a fintech company). So, a ton of services use the bank, and you may have used one of those services. It’s been reported that 7.6 million Americans were affected by the breach, with most of the data stolen being customer information, including social security numbers, account numbers, and dates of birth.

The small bright side? No customer funds were accessed during the breach. Evolve states that after the breach they are doing some basic things like resetting user passwords and strengthening their security infrastructure.

Head back to the table of contents.

The We Told You So Award: AU10TIX

AU10TIX is an “identity verification” company used by the likes of TikTok and X to confirm that users are who they claim to be. AU10TIX and companies like it collect and review sensitive private documents such as driver’s license information before users can register for a site or access some content.

Unfortunately, there is growing political interest in mandating identity or age verification before allowing people to access social media or adult material. EFF and others oppose these plans because they threaten both speech and privacy. As we said in 2023, verification mandates would inevitably lead to more data breaches, potentially exposing government IDs as well as information about the sites that a user visits.

Look no further than the AU10TIX breach to see what we mean. According to a report by 404 Media in May, AU10TIX left login credentials exposed online for more than a year, allowing access to very sensitive user data.

404 Media details how a researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents.” This included “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents.

The AU10TIX breach did not seem to lead to exposure beyond what the researcher showed was possible. But AU10TIX and other companies must do a better job at locking down user data. More importantly, politicians must not create new privacy dangers by requiring identity and age verification.

If age verification requirements become law, we’ll be handing a lot of our sensitive information over to companies like AU10TIX. This is the first We Told You So Breachie award, but it likely won’t be the last. 

Head back to the table of contents.

The Why We’re Still Stuck on Unique Passwords Award: Roku

In April, Roku announced not yet another new way to display more ads, but a data breach (its second of the year) in which 576,000 accounts were compromised via a “credential stuffing attack.” This is a common, relatively easy sort of automated attack in which thieves use previously leaked username and password combinations (from a past data breach of an unrelated company) to get into accounts on a different service. So if, say, your username and password were exposed in the Comcast data breach of 2015, and you used the same username and password on Roku, the attacker might have been able to get into your account. Thankfully, fewer than 400 Roku accounts saw unauthorized purchases, and no payment information was accessed.

But the ease of this sort of data breach is why it’s important to use unique passwords everywhere. A password manager, including one that might be free on your phone or browser, makes this much easier to do. Likewise, credential stuffing illustrates why it’s important to use two-factor authentication. After the breach, Roku turned on two-factor authentication for all accounts. This way, even if someone did get your account password, they’d also need a second code, which in Roku’s case is sent to your phone number or email address.
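Credential stuffing only works because passwords get reused, and you can check whether a password has already surfaced in a known breach without ever sending it anywhere. Here is a minimal sketch, assuming Python 3 and network access, that queries the public Have I Been Pwned “Pwned Passwords” range API; its k-anonymity design means only the first five characters of the password’s SHA-1 hash leave your machine.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora,
    via the Pwned Passwords range API (k-anonymity: only the first five
    hex characters of the SHA-1 hash are sent over the network)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():  # each line is "<35-char suffix>:<count>"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A commonly reused password returns a very large count.
    print(pwned_count("password123"))
```

Any password that comes back with a nonzero count belongs on the never-reuse pile.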

Head back to the table of contents.

The Listen, Security Researchers are Trying to Help Award: City of Columbus

In August, the security researcher David Ross Jr. (also known as Connor Goodwolf) discovered that a ransomware attack against the City of Columbus, Ohio, was much more serious than city officials initially revealed. After the researcher informed the press and provided proof, the city accused him of violating multiple laws and obtained a gag order against him.

Rather than silencing the researcher, city officials should have celebrated him for helping victims understand the true extent of the breach. EFF and security researchers know the value of this work. And EFF has a team of lawyers who help protect researchers and their work. 

Here is how not to deal with a security researcher: In July, Columbus learned it had suffered a ransomware attack. A group called Rhysida took responsibility. The city did not pay the ransom, and the group posted some of the stolen data online. The mayor announced the stolen data was “encrypted or corrupted,” so most of it was unusable. Later, the researcher, David Ross, helped inform local news outlets that in fact the breach did include usable personal information on residents. He also attempted to contact the city. Days later, the city offered free credit monitoring to all of its residents and confirmed that its original announcement was inaccurate.

Unfortunately, the city also filed a lawsuit, and a judge signed a temporary restraining order preventing the researcher from accessing, downloading, or disseminating the data. Later, the researcher agreed to a more limited injunction. The city eventually confirmed that the data of hundreds of thousands of people was stolen in the ransomware attack, including driver’s licenses, social security numbers, employee information, and the identities of juvenile victims, undercover police officers, and confidential informants.

Head back to the table of contents.

The Have I Been Pwned? Award: Spoutible

The Spoutible breach has layers—layers of “no way!” that keep revealing more and more amazing little facts the deeper one digs.

It all started with a leaky API. On a per-user basis, it returned not just the sort of information you’d expect from a social media platform, but also the user’s email, IP address, and phone number. No way! Why would you do that?

But hold on, it also includes a bcrypt hash of their password. No way! Why would you do that?!

Ah well, at least they offer two-factor authentication (2FA) to protect against password leakages, except… the API was also returning the secret used to generate the 2FA OTP. No way! So, if someone had enabled 2FA it was immediately rendered useless by virtue of this field being visible to everyone.
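To see why that leak is fatal, here is a minimal sketch assuming the third-party pyotp library and an invented base32 secret: anyone who reads the secret out of the API can mint exactly the codes the victim’s authenticator app produces.

```python
# Minimal sketch of why a leaked TOTP secret defeats 2FA.
# Requires the third-party "pyotp" library; the secret is invented.
import pyotp

leaked_secret = "JBSWY3DPEHPK3PXP"  # as exposed, hypothetically, by the API

totp = pyotp.TOTP(leaked_secret)
print(totp.now())               # the currently valid six-digit code
print(totp.verify(totp.now()))  # True: the "second factor" is no factor at all
```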

However, the pièce de résistance comes with the next field in the API: the “em_code.” You know how when you do a password reset you get emailed a secret code that proves you control the address and can change the password? That was the code! No way!

– EFF thanks guest author Troy Hunt for this contribution to the Breachies.

Head back to the table of contents.

The Reporting’s All Over the Place Award: National Public Data

In January 2024, there was almost no chance you’d have heard of a company called National Public Data. But starting in April, then ramping up in June, stories revealed a breach affecting the background checking data broker that included names, phone numbers, addresses, and social security numbers of at least 300 million people. By August, the reported number ballooned to 2.9 billion people. In October, National Public Data filed for bankruptcy, leaving behind nothing but a breach notification on its website.

But what exactly was stolen? The evolving news coverage has raised more questions than it has answered. Too bad National Public Data has failed to tell the public more about the data that the company failed to secure.

One analysis found that some of the dataset was inaccurate, with a number of duplicates; also, while there were 137 million email addresses, they weren’t linked to social security numbers. Another analysis had similar results. As for social security numbers, there were likely somewhere around 272 million in the dataset. The data was so jumbled that it had names matched to the wrong email or address, and included a large chunk of people who were deceased. Oh, and that 2.9 billion number? That was the number of rows of data in the dataset, not the number of individuals. That 2.9 billion people number appeared to originate from a complaint filed in Florida.

Phew, time to check in with Count von Count on this one, then.

How many people were truly affected? It’s difficult to say for certain. The only thing we learned for sure is that starting a data broker company appears to be incredibly easy, as NPD was owned by a retired sheriff’s deputy and a small film studio and didn’t seem to be a large operation. While this data broker got caught with more leaks than the Titanic, hundreds of others are still out there collecting and hoarding information, and failing to watch out for the next iceberg.

Head back to the table of contents.

The Biggest Health Breach We’ve Ever Seen Award: Change Healthcare

In February, a ransomware attack on Change Healthcare exposed the private health information of over 100 million people. The company, which processes 40% of all U.S. health insurance claims, was forced offline for nearly a month. As a result, healthcare practices nationwide struggled to stay operational and patients experienced limits on access to care. Meanwhile, the stolen data poses long-term risks for identity theft and insurance fraud for millions of Americans—it includes patients’ personal identifiers, health diagnoses, medications, insurance details, financial information, and government identity documents.

The misuse of medical records can be harder to detect and correct than regular financial fraud or identity theft. The FTC recommends that people at risk of medical identity theft watch out for suspicious medical bills or debt collection notices.

The hack highlights the need for stronger cybersecurity in the healthcare industry, which is increasingly targeted by cyberattacks. The Change Healthcare hackers were able to access a critical system because it lacked two-factor authentication, a basic form of security.

To make matters worse, Change Healthcare’s recent merger with Optum, which antitrust regulators tried and failed to block, even further centralized vast amounts of sensitive information. Many healthcare providers blamed corporate consolidation for the scale of disruption. As the former president of the American Medical Association put it, “When we have one option, then the hackers have one big target… if they bring that down, they can grind U.S. health care to a halt.” Privacy and competition are related values, and data breach and monopoly are connected problems.

Head back to the table of contents.

The There’s No Such Thing As Backdoors for Only “Good Guys” Award: Salt Typhoon

When companies build backdoors into their services to provide law enforcement access to user data, these backdoors can be exploited by thieves, foreign governments, and other adversaries. There are no methods of access that are magically only accessible to “good guys.” No security breach has demonstrated that more clearly than this year’s attack by Salt Typhoon, a Chinese government-backed hacking group.

Internet service providers generally have special systems to provide law enforcement and intelligence agencies access to user data. They do that to comply with laws like CALEA, which require telecom companies to provide a means for “lawful intercepts”—in other words, wiretaps.

The Salt Typhoon group was able to access the powerful tools that in theory have been reserved for U.S. government agencies. The hackers infiltrated the nation’s biggest telecom networks, including Verizon, AT&T, and others, and were able to target their surveillance based on U.S. law enforcement wiretap requests. Breaches elsewhere in the system let them listen in on calls in real time. People under U.S. surveillance were clearly some of the targets, but the hackers also targeted both 2024 presidential campaigns and officials in the State Department. 

While fewer than 150 people have been identified as targets so far, the number of people who were called or texted by those targets runs into the “millions,” according to a senator who has been briefed on the hack. What’s more, the Salt Typhoon hackers still have not been rooted out of the networks they infiltrated.

The idea that only authorized government agencies would use such backdoor access tools has always been flawed. With sophisticated state-sponsored hacking groups operating across the globe, a data breach like Salt Typhoon was only a matter of time. 

Head back to the table of contents.

The Snowballing Breach of the Year Award: Snowflake

Thieves compromised the corporate customer accounts for U.S. cloud analytics provider Snowflake. The corporate customers included AT&T, Ticketmaster, Santander, Neiman Marcus, and many others: 165 in total.

This led to a massive breach of billions of data records for individuals using these companies. A combination of infostealer malware infections on non-Snowflake machines as well as weak security used to protect the affected accounts allowed the hackers to gain access and extort the customers. At the time of the hack, April-July of this year, Snowflake was not requiring two-factor authentication, an account security measure which could have provided protection against the attacks. A number of arrests were made after security researchers uncovered the identities of several of the threat actors.

But what does Snowflake do? According to its website, Snowflake “is a cloud-based data platform that provides data storage, processing, and analytic solutions.” Essentially, it stores and indexes troves of customer data for companies to look at. And the larger the amount of data stored, the bigger the target it presents to malicious actors seeking leverage to extort those companies. The problem is that the data is about all of us. In the case of Snowflake customer AT&T, this includes billions of call and text logs of its customers, putting individuals’ sensitive data at risk of exposure. A privacy-first approach would employ techniques such as data minimization: either not collecting that data in the first place or shortening the period for which it is retained. Otherwise it just sits there waiting for the next breach.

Head back to the table of contents.

Tips to Protect Yourself

Data breaches are such a common occurrence that it’s easy to feel like there’s nothing you can do, nor any point in trying. But privacy isn’t dead. While some information about you is almost certainly out there, that’s no reason for despair. In fact, it’s a good reason to take action.

There are steps you can take right now with all your online accounts to best protect yourself from the next data breach (and the next, and the next):

  • Use unique passwords on all your online accounts. This is made much easier by using a password manager, which can generate and store those passwords for you (for a sense of what that generation step looks like, see the sketch after this list). When you have a unique password for every website, a data breach of one site won’t cascade to others.
  • Use two-factor authentication when a service offers it. Two-factor authentication makes your online accounts more secure by requiring additional proof (“factors”) alongside your password when you log in. While two-factor authentication adds another step to the login process, it’s a great way to help keep out anyone not authorized, even if your password is breached.
  • Freeze your credit. Many experts recommend freezing your credit with the major credit bureaus as a way to protect against the sort of identity theft that’s made possible by some data breaches. Freezing your credit prevents someone from opening a new line of credit in your name without additional information, like a PIN or password, to “unfreeze” the account. And if you have kids, you can freeze their credit too; it might sound absurd considering they can’t even open bank accounts, but it protects them from identity theft all the same.
  • Keep a close eye out for strange medical bills. With the number of health companies breached this year, it’s also a good idea to watch for healthcare fraud. The Federal Trade Commission recommends watching for strange bills, letters from your health insurance company for services you didn’t receive, and letters from debt collectors claiming you owe money. 
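As promised in the first tip above, here is a minimal sketch, using only Python’s standard library, of the generation step a password manager performs for you: a long random password drawn from a cryptographically secure source, created fresh for every site.

```python
# Minimal sketch of password generation with the standard library.
# "secrets" (unlike "random") is designed for security-sensitive use.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per site; a breach of one account can't cascade.
print(generate_password())
```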

Head back to the table of contents.

(Dis)Honorable Mentions

By one report, 2023 saw over 3,000 data breaches. The figure so far this year is looking slightly smaller, with around 2,200 reported through the end of the third quarter. But 2,200 and counting is little comfort.

We did not investigate every one of these 2,000-plus data breaches, but we looked at a lot of them, including the news coverage and the data breach notification letters that many state Attorney General offices host on their websites. We can’t award the coveted Breachie Award to every company that was breached this year. Still, here are some (dis)honorable mentions:

ADT, Advance Auto Parts, AT&T, AT&T (again), Avis, Casio, Cencora, Comcast, Dell, El Salvador, Fidelity, FilterBaby, Fortinet, Framework, Golden Corral, Greylock, Halliburton, HealthEquity, Heritage Foundation, HMG Healthcare, Internet Archive, LA County Department of Mental Health, MediSecure, Mobile Guardian, MoneyGram, muah.ai, Ohio Lottery, Omni Hotels, Oregon Zoo, Orrick, Herrington & Sutcliffe, Panda Restaurants, Panera, Patelco Credit Union, Patriot Mobile, pcTattletale, Perry Johnson & Associates, Roll20, Santander, Spytech, Synnovis, TEG, Ticketmaster, Twilio, USPS, Verizon, VF Corp, WebTPA.

What now? Companies need to do a better job of only collecting the information they need to operate, and properly securing what they store. Also, the U.S. needs to pass comprehensive privacy protections. At the very least, we need to be able to sue companies when these sorts of breaches happen (and while we’re at it, it’d be nice if we got more than $5.21 checks in the mail). EFF has long advocated for a strong federal privacy law that includes a private right of action.

Saving the Internet in Europe: Defending Free Expression

December 19, 2024 at 13:26

This post is part two in a series of posts about EFF’s work in Europe. Read about how and why we work in Europe here. 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe. 

EFF’s approach to free speech

The global spread of internet access and digital services promised a new era of freedom of expression, where everyone could share and access information, speak out and find an audience without relying on gatekeepers, and make, tinker with, and share creative works.  

Everyone should have the right to express themselves and share ideas freely. Various European countries have experienced totalitarian regimes and extensive censorship in the past century, and as a result, many Europeans still place special emphasis on privacy and freedom of expression. These values are enshrined in the European Convention on Human Rights and the Charter of Fundamental Rights of the European Union – essential legal frameworks for the protection of fundamental rights.  

Today, as so much of our speech is facilitated by online platforms, there is an expectation that they, too, respect fundamental rights. Through their terms of service, community guidelines, or house rules, platforms get to unilaterally define what speech is permissible on their services. The enforcement of these rules can be arbitrary, opaque, and selective, resulting in the suppression of contentious ideas and minority voices.  

That’s why EFF has been fighting against government threats to free expression while also working to hold tech companies accountable for grounding their content moderation practices in robust human rights frameworks. That entails setting out clear rules and standards for internal processes, such as notifying users and explaining decisions when terms of service are enforced or changed. In the European Union, we have worked for decades to ensure that laws governing online platforms respect fundamental rights, advocated against censorship, and spoken up on behalf of human rights defenders. 

What’s the Digital Services Act and why do we keep talking about it? 

For the past few years, we have been especially busy addressing human rights concerns in the drafting and implementation of the Digital Services Act (DSA), the new law setting out the rules for online services in the European Union. The DSA covers most online services, ranging from online marketplaces like Amazon and search engines like Google to social networks like Meta’s and app stores. However, not all of its rules apply to all services – instead, the DSA follows a risk-based approach that puts the most obligations on the largest services with the highest impact on users.

All service providers must ensure that their terms of service respect fundamental rights, that users can get in touch with them easily, and that they report on their content moderation activities. Additional rules apply to online platforms: they must give users detailed information about content moderation decisions and the right to appeal, and they face additional transparency obligations. They also have to provide some basic transparency into the functioning of their recommender systems and are not allowed to target underage users with personalized ads.

The most stringent obligations apply to the largest online platforms and search engines, those with more than 45 million users in the EU. These companies, which include X, TikTok, Amazon, Google Search and Play, YouTube, and several porn platforms, must proactively assess and mitigate systemic risks related to the design, functioning, and use of their services. These include risks to the exercise of fundamental rights, elections, public safety, civic discourse, the protection of minors, and public health. This novel approach might have merit but is also cause for concern: systemic risks are barely defined and could lead to restrictions of lawful speech, and measures to address these risks, for example age verification, have negative consequences of their own, like undermining users’ privacy and access to information.  

The DSA is an important piece of legislation to advance users’ rights and hold companies accountable, but it also comes with significant risks. We are concerned about the DSA’s requirement that service providers proactively share user data with law enforcement authorities and the powers it gives government agencies to request such data. We caution against the misuse of the DSA’s emergency mechanism and against the expansion of the DSA’s systemic-risk governance approach into a catch-all tool to crack down on undesired but lawful speech. Similarly, the appointment of trusted flaggers could lead to pressure on platforms to over-remove content, especially as the DSA does not prevent government authorities from becoming trusted flaggers.  

EFF has been advocating for lawmakers to take a measured approach that doesn’t undermine freedom of expression. Even though we have been successful in heading off some of the most harmful ideas, concerns remain, especially with regard to the politicization of the DSA’s enforcement and potential over-enforcement. That’s why we will keep a close eye on the enforcement of the DSA, ready to use all means at our disposal to push back against over-enforcement and to defend user rights.  

European laws often affect users globally. To give non-European users a voice in Brussels, we have been facilitating the DSA Human Rights Alliance, which is formed around the conviction that the DSA must adopt a human rights-based approach to platform governance and consider its global impact. We will continue building on and expanding the Alliance to ensure that the enforcement of the DSA doesn’t lead to unintended negative consequences and respects users’ rights everywhere in the world.

The UK’s Platform Regulation Legislation 

In parallel to the Digital Services Act, the UK has passed its own platform regulation, the Online Safety Act (OSA). Seeking to make the UK “the safest place in the world to be online,” the OSA will lead to a more censored, locked-down internet for British users. The Act empowers the UK government to undermine not just the privacy and security of UK residents, but internet users worldwide. 

Online platforms will be expected to remove content that the UK government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the UK as in the U.S. and elsewhere, people disagree sharply about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions.  

The OSA will also lead to harmful age-verification systems. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and to anonymous speech, which is sometimes a necessity.  

As Ofcom starts to release its regulations and guidelines, we’re watching how the regulator plans to avoid these human rights pitfalls, and we will keep fighting efforts that fall short of protecting speech and privacy online.  

Media freedom and plurality for everyone 

Another issue that we have been championing is media freedom. Similar to the DSA, the EU recently overhauled its rules for media services with the European Media Freedom Act (EMFA). In this context, we pushed back against rules that would have forced online platforms like YouTube, X, or Instagram to carry any content by media outlets. Though intended to bolster media pluralism, forcing platforms to host content has severe consequences: millions of EU users could no longer trust that online platforms would address content violating community standards. Moreover, there is no easy way to differentiate between legitimate media providers and those known for spreading disinformation, such as government-affiliated Russian sites active in the EU. Taking away platforms’ ability to restrict or remove such content could undermine rather than foster public discourse.  

The final version of the EMFA introduced a number of important safeguards but is still a bad deal for users. We will closely follow its implementation to ensure that the new rules actually foster media freedom and plurality, inspire trust in the media, and limit the use of spyware against journalists.  

Exposing censorship and defending those who defend us 

Covering regulation is just a small part of what we do. Over the past years, we have again and again revealed how companies’ broad-stroked content moderation practices censor users in the name of fighting terrorism, and restrict the voices of LGBTQ folks, sex workers, and underrepresented groups.  

Going into 2025, we will continue to shed light on these restrictions of speech and will pay particular attention to the censorship of Palestinian voices, which has been rampant. We will continue collaborating with our allies in the Digital Intimacy Coalition to share how restrictive speech policies often disproportionally affect sex workers. We will also continue to closely analyze the impact of the increasing and changing use of artificial intelligence in content moderation.  

Finally, a crucial part of our work in Europe has been speaking out for those who cannot: human rights defenders facing imprisonment and censorship.  

Much work remains to be done. We have put forward comprehensive policy recommendations to European lawmakers and we will continue fighting for an internet where everyone can make their voice heard. In the next posts in this series, you will learn more about how we work in Europe to ensure that digital markets are fair, offer users choice and respect fundamental rights. 

We're Creating a Better Future for the Internet 🧑‍🏭

By: Aaron Jue
December 19, 2024 at 12:20

In the early years of the internet, website administrators faced a burdensome and expensive process to deploy SSL certificates. But today, hundreds of thousands of people have used EFF’s free Certbot tool to spread that sweet HTTPS across the web. Now almost all internet traffic is encrypted, and everyone gets a basic level of security. Small actions mean big change when we act together. Will you support important work like this and give EFF a Year-End Challenge boost?
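As a small illustration of what those certificates are, here is a minimal sketch, using only Python’s standard library, that connects to a site over HTTPS and prints the expiry date of the certificate it presents: the artifact that tools like Certbot obtain and renew automatically.

```python
# Minimal sketch: inspect the HTTPS certificate a site presents.
import socket
import ssl

def cert_not_after(hostname: str, port: int = 443) -> str:
    """Return the expiry timestamp of the site's TLS certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return cert["notAfter"]  # e.g. 'Mar  1 12:00:00 2025 GMT'

print(cert_not_after("www.eff.org"))
```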

Give Today

Unlock Bonus Grants Before 2025

Make a donation of ANY SIZE by December 31 and you’ll help us unlock bonus grants! Every supporter gets us closer to a series of seven Year-End Challenge milestones set by EFF’s board of directors. These grants become larger as the number of online rights supporters grows. Everyone counts! See our progress.

🚧 Digital Rights: Under Construction 🚧

Since 1990, EFF has defended your digital privacy and free speech rights in the courts, through activism, and by making open source privacy tools. This team is committed to watching out for users no matter what direction technological innovation takes us. And that work is funded entirely by donations.

Show your support for digital rights with free EFF member gear.

With help from people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; push the USPTO to withdraw harmful patent proposals; fight for the public's right to access police drone footage; and show why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet safety.

As technology’s reach continues to expand, so do everyone’s concerns about harmful side effects. That’s where EFF’s ample experience in tech policy, the law, and human rights shines. You can help us.

Donate to defend digital rights today and you’ll help us unlock bonus grants before the year ends.

Join EFF!

Proudly Member-Supported Since 1990

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

There’s No Copyright Exception to First Amendment Protections for Anonymous Speech

19 December 2024 at 11:22

Some people just can’t take a hint. Today’s perfect example is a group of independent movie distributors that have repeatedly tried, and failed, to force Reddit to give up the IP addresses of several users who posted about downloading movies. 

The distributors claim they need this information to support their copyright claims against internet service provider Frontier Communications, because it might be evidence that Frontier wasn’t enforcing its repeat infringer policy and therefore couldn’t claim safe harbor protections under the Digital Millennium Copyright Act. Courts have repeatedly refused to enforce these subpoenas, recognizing that the distributors couldn’t pass the test the First Amendment requires prior to unmasking anonymous speakers.  

Here's the twist: after the magistrate judge in this case applied this standard and quashed the subpoena, the movie distributors sought review from the district court judge assigned to the case. The second judge likewise denied discovery as unduly burdensome but, in a hearing on the matter, said there was no First Amendment issue because the users were talking about copyright infringement. In their subsequent appeal to the Ninth Circuit, the distributors invite the appellate court to endorse the judge’s statement. 

As we explain in an amicus brief supporting Reddit, the court should refuse that invitation. Discussions about illegal activity clearly are protected speech. Indeed, the Supreme Court recently affirmed that even “advocacy of illegal acts” is “within the First Amendment’s core.” In fact, protecting such speech is a central purpose of the First Amendment because it ensures that people can robustly debate civil and criminal laws and advocate for change. 

There is no reason to imagine that this bedrock principle doesn’t apply just because the speech concerns copyright infringement, especially where the speakers aren’t even defendants in the case, but independent third parties. And unmasking Does in copyright cases carries particular risks given the long history of copyright claims being used as an excuse to take down lawful as well as infringing content online. 

We’re glad to see Reddit fighting back against these improper subpoenas, and proud to stand with the company as it stands up for its users. 

UK Politicians Join Organizations in Calling for Immediate Release of Alaa Abd El-Fattah

19 December 2024 at 07:06

With the UK’s Prime Minister Keir Starmer and Foreign Secretary David Lammy having failed to secure the release of British-Egyptian blogger, coder, and activist Alaa Abd El-Fattah, UK politicians are calling for tougher measures to secure Alaa’s immediate return to the UK.

During a debate on detained British nationals abroad in early December, chairwoman of the Commons Foreign Affairs Committee Emily Thornberry asked the House of Commons why the UK has continued to organize industry delegations to Cairo while “the Egyptian government have one of our citizens—Alaa Abd El-Fattah—wrongfully held in prison without consular access.”

In the same debate, Labour MP John McDonnell urged the introduction of a “moratorium on any new trade agreements with Egypt until Alaa is free,” which was supported by other politicians. Liberal Democrat MP Calum Miller also highlighted words from Alaa, who told his mother during a recent prison visit that he had “hope in David Lammy, but I just can’t believe nothing is happening...Now I think either I will die in here, or if my mother dies I will hold him to account.”

Alaa’s mother, mathematician Laila Soueif, has been on hunger strike for 79 days while she and the rest of his family have worked to engage the British government in securing Alaa’s release. On December 12, she also started protesting daily outside the Foreign Office and has since been joined by numerous MPs.

Support for Alaa has come from many directions. On December 6, 12 Nobel laureates wrote to Keir Starmer urging him to secure Alaa’s immediate release: “Not only because Alaa is a British citizen, but to reanimate the commitment to intellectual sanctuary that made Britain a home for bold thinkers and visionaries for centuries.” The pressure on Labour’s senior politicians has continued throughout the month, with more than 100 MPs and peers writing to David Lammy on December 15 demanding Alaa be freed.

Alaa should have been released on September 29, after serving his five-year sentence for sharing a Facebook post about a death in police custody, but Egyptian authorities have continued his imprisonment in contravention of the country’s own Criminal Procedure Code. British consular officials are prevented from visiting him in prison because the Egyptian government refuses to recognise Alaa’s British citizenship.

David Lammy met with Alaa’s family in November and promised to take action. But the UK’s Prime Minister failed to raise the case at the G20 Summit in Brazil when he met with Egypt’s President El-Sisi. 

If you’re based in the UK, here are some actions you can take to support the calls for Alaa’s release:

  1. Write to your MP (external link): https://freealaa.net/message-mp 
  2. Join Laila Soueif outside the Foreign Office daily between 10-11am
  3. Share Alaa’s plight on social media using the hashtag #freealaa

The UK Prime Minister and Foreign Secretary’s inaction is unacceptable. Every second counts, and time is running out. The government must do everything it can to ensure Alaa’s immediate and unconditional release.
